blog.JanWiersma.com

Lower software dev cost? No!

The software development landscape is on the cusp of a revolution. Large Language Models (LLMs) promise to streamline workflows, automate repetitive tasks, and even generate code. This translates to exciting possibilities: cheaper development and faster time-to-market, as seen in the recent advancements showcased at Microsoft Build 2024’s keynote this week. GitHub Copilot and Copilot Workspace are prime examples of how LLMs are being leveraged to empower developers.

But here’s the Jevons Paradox lurking in the shadows, and its impact might be even more significant with LLMs. Remember the paradox? As coal became cheaper and more abundant, its use skyrocketed, ultimately leading to a greater (not smaller!) energy demand.

Imagine applying this to software. With LLMs lowering the barrier to entry, custom solutions become not just possible, but expected. Think niche functionalities tailored to individual workflows, features that were previously cost-prohibitive.
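
To make the paradox concrete, here is a toy back-of-the-envelope calculation; every number is hypothetical, and only the shape of the outcome matters:

```python
# Toy illustration of the Jevons Paradox applied to software development.
# All numbers are hypothetical; the point is that a lower unit cost can
# still produce a higher total spend once demand reacts to the new price.

cost_per_feature_before = 10_000   # assumed cost of one custom feature today
cost_per_feature_after = 2_000     # assumed cost once LLMs cut effort 5x

features_demanded_before = 10      # what a customer orders at the old price
features_demanded_after = 80       # demand once niche features become affordable

spend_before = cost_per_feature_before * features_demanded_before
spend_after = cost_per_feature_after * features_demanded_after

print(f"before: {spend_before:,}")  # 100,000
print(f"after:  {spend_after:,}")   # 160,000 -- cheaper per feature, higher total spend
```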

Jevons on Steroids: Here’s where LLMs amplify the effect. Because LLMs can adapt and learn at an incredible pace, user expectations will likely accelerate. They’ll not only demand custom solutions, but also expect them to evolve rapidly alongside their changing needs.

The Challenge: Keeping Up with a Moving Target

* Customers demanding more: Lower costs will fuel the fire for rapidly adaptable features, including options for niche functionalities specific to individual workflows.
* The pressure to keep up: This surge in demand will necessitate continuous investment in LLM training data, development tools, and most importantly, upskilling our workforce. Developers will need to learn to effectively utilize and guide LLMs to meet these ever-evolving needs. Stagnation means losing customers to competitors who can adapt faster.

LLMs will undoubtedly make development more efficient. And it will be cheaper when compared directly to today’s speed of execution and customer expectations. But the key takeaway is this: the cost savings won’t be a one-time win. To thrive in this new paradigm, we need to embrace a culture of continuous improvement and invest in keeping our LLM-powered development tools on the cutting edge, alongside a skilled workforce ready to leverage their power.


The Decentralized Web: Reclaiming Our Power

Remember the Wild West days of the internet? Unfettered innovation, boundless potential, and a sense of control over your online experience. Today, it feels increasingly like a battleground. Governments tighten their grip with regulations, while corporations gobble up our data and influence every click. This is the future the book ‘The Sovereign Individual’ warned about – a future where individuals have little control over their digital lives. But it doesn’t have to be this way.

There’s a growing movement advocating for a fundamental shift: a decentralized web powered by cutting-edge technologies. This isn’t just about technology; it’s about reclaiming power and building a future YOU control.

Technical Innovation Leading the Charge:
* Nostr: Imagine a social media platform where no single entity dictates the rules. Nostr, built on a censorship-resistant protocol, empowers you to own your data and experiences.
* Web5: This next-generation web leverages blockchain technology, giving you greater ownership over your data and online interactions, aligning perfectly with Self-Sovereign Identity (SSI) principles. With SSI, you control your digital identity, deciding who can access your information.
* AI: Decentralized AI systems can analyze data on a distributed network, enhancing security, transparency, and user experience without compromising individual privacy.
* Cryptography: Protocols like Nostr utilize cryptography to ensure secure communication and data ownership, making it harder for entities to exploit your personal information (a small sketch follows after this list).
* Blockchain: This foundational technology underpins many decentralized solutions, providing a secure and transparent way to store and manage data.
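
To make the cryptography point concrete, here is a minimal sketch of how a Nostr event ID is derived under NIP-01. The public key is a made-up placeholder, and a real client would additionally sign this ID with a BIP-340 Schnorr signature (e.g. via a secp256k1 library):

```python
import hashlib
import json
import time

# Minimal sketch of NIP-01 event ID derivation. The pubkey is a placeholder;
# a real event also carries a Schnorr (BIP-340) signature over this ID.
pubkey = "a" * 64                 # hypothetical 32-byte public key, hex encoded
created_at = int(time.time())
kind = 1                          # kind 1 = short text note
tags = []
content = "Hello, decentralized world!"

# NIP-01: the ID is the SHA-256 of this exact JSON serialization,
# with no whitespace and unescaped UTF-8.
serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(event_id)
```

Because the ID is a hash of the signed content, no relay can silently alter your note: your data remains verifiably yours.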

Why Decentralization Matters:
* Empowering Individuals: A decentralized web puts you in the driver’s seat. You control your data, choose who can access it, and have a say in how online platforms operate.
* Reduced Centralized Control: Decentralization weakens the grip of powerful entities, fostering a more equitable digital landscape where innovation and competition thrive.
* Enhanced Privacy: Decentralized systems make it harder for governments and corporations to collect and exploit your personal information.

The Road Ahead: Shaping Our Digital Future

Building a truly decentralized web won’t be easy, but the potential benefits are immense. We have the technology; now we need the action. Let’s embrace these innovations and build a future where the internet empowers individuals, not corporations or governments. This is the future we should aim for, a future where Self-Sovereign Identity and decentralized technologies are the norm.

One may think this is all just about technology, but it isn’t – it’s about the kind of future we want to create. What kind of digital world do you envision for yourself and your loved ones?


Goodbye SDL

After 3 very dynamic years, I’m leaving SDL today. It has been a great journey and I enjoyed every minute of it. Anyone who has followed SDL in the last 9 months has seen a lot of changes announced: the divestment of 3 business units, a new CEO, a new CTO,…

While I personally think these changes are good for the company and will bring focus and stability going forward, I also decided I wasn’t going to be part of that future anymore.

With this in mind, I shifted my focus over the last few months to helping find a good home for the divested business units. It gave me the option to slowly step away from my day-to-day responsibilities without causing too much disruption.

During the hand-over period, you automatically get confronted with what you are going to leave behind. <cue music> Don’t Know What You Got (Till It’s Gone) </cue music> And the saddest thing to leave behind is actually my teams & peers.

Continue reading


Applying firefighter tactics to (IT) leadership

This week I will be celebrating my 15th year of active volunteer firefighter duty. As you naturally tend to do when celebrating milestones like these, I have been reflecting on the past years and the lessons learned.
One thing that specifically stood out are the moments in my IT leadership career where I applied firefighter techniques and skills I picked up over the years.
Most of them revolve around problem solving and how to get the most out of teams. While there is an obvious link between firefighting and solving issues in a high-pressure or crisis situation, I learned that the same tactics also apply to any other challenge I was confronted with.
When firefighters arrive at the scene of a fire, they always follow the same protocol:
- Assess the situation
- Locate the fire
- Identify & control the flow path
- Extinguish the fire
- Reset & evaluate
In business, and especially at higher leadership levels, some problems may seem very daunting, creating anxiety and leaving you feeling overwhelmed. Firefighters are used to stepping into highly unknown situations with confidence, and a protocol like the one above helps you gain control of the situation, step by step.

Continue reading


The future of datacenter build & co-lo (or CIOs are getting out of the datacenter business – Part 2)

Last year my friend Tim Crawford wrote an excellent article on why CIOs should get out of the datacenter business. Tim focused on how today’s big corporates are moving away from building, owning or renting datacenter facilities in favour of consuming IT at higher levels of the stack.

As he focused on the migration of today’s leading big companies, it leaves the question: what about the future Fortune 500 companies?

Continue reading


Code of Conduct

As a ‘code of conduct’ seems to be needed nowadays for interactions between people, especially at tech conferences, I’m releasing my own ‘code of conduct’.

The following applies when you interact with me, listen to my talks or see any of my rants on social media:

1. Respect & integrity. I will treat you with respect by default; please extend the same courtesy to me. I have strong views on certain issues that may be completely the opposite of your view.

2. Acknowledge my culture. I’m Dutch. I’m direct, blunt, and we founded ‘going Dutch‘. I acknowledge the fact that you may have another cultural background and therefore a different view of the world around us.

3. If you don’t like what I’m saying or how I’m acting, let me know. Or walk away. If you don’t confront me and just complain behind my back, you take away my ability to learn. There are some good guidelines for delivering feedback to someone; you may want to read them someday, if you want your feedback to resonate.

4. Confidentiality: by default I will keep any information you provide to me confidential. You can share anything I tell you with anyone, unless I specifically tell you the information I’m sharing is confidential.

5. I’m even more blunt on social media and when delivering keynotes. Just unfollow me if you can’t handle that. See the disclaimer: https://www.janwiersma.com/?page_id=160

 


Our Trust issue with Cloud Computing

Recently SDL (my employer) did a survey on customer ‘trust’, aimed at the marketer.

Being in the IT space, I have dealt a lot with ‘trust’ over the last few years. Being responsible for the cloud services delivery of my company’s SaaS & hosted products, we deal with clients evaluating and buying our services. My teams also evaluate & consume IaaS/PaaS/SaaS services in the market, on which we build our own services.

The ‘trust’ issue in consuming cloud services is an interesting one. IaaS platforms like Amazon abstract complexity away from the user and are easy to consume. The same goes for SaaS services like Box.com or Gmail; the user has no clue what happens behind the scenes. Most business users don’t care about the abstraction of that complexity. It just works…

It’s the IT people that seem to have the biggest issue with gracefully losing control and surrendering data, applications, etc., to someone else. Control is an emotional issue we are often unprepared to deal with. It leaves us with the feeling that ‘they can’t take care of it as well as I can…’ IT people specifically know how complex IT can be, and how hard it can be to deliver the guarantees that the business is looking for. For many years we have tried to manage the rising complexity of IT within the business with tools and processes, never completely able to satisfy the business, as we were either too expensive or not hitting our SLAs.

Continue reading


German companies ask for Internet border patrol.

In the last year multiple companies started serving German customers out of Germany-based datacenter locations.

There seems to be a particularly strong sentiment around security & privacy with German companies after the Edward Snowden leaks. The kneejerk reaction is to mandate that servers sit within German borders, as if that would take any security & privacy concern away. Cloud providers are now starting to follow this customer demand.

Interestingly, this reaction is more sentiment-driven, as there is no legal ground to request this. Especially as more and more German companies are putting this in place as a default policy, regardless of the type of data (privacy sensitive or not…).

Looking at the Federal Data Protection Act (Bundesdatenschutzgesetz in German, or ‘BDSG’), it states that certain transfers of data (like personal data) outside of the EU need to be reported and approved, and that data controllers must take appropriate technical and organizational measures against unauthorized or unlawful processing and against accidental loss or destruction of, or damage to, personal data. Nothing says servers need to be in Germany.

Looking at other EU countries, Germany seems to be the only country where organizations express such behavior. The next in line could be Switzerland.

Continue reading


Google’s BMS got hacked. Is your datacenter BMS next?

<English cross post with my DCP blog>

A recent US Congressional survey stated that power companies are targeted by cyber attacks 10,000 times per month.

After the 2010 discovery of the Stuxnet virus, the North American Electric Reliability Corporation (NERC) established both mandatory standards and voluntary measures to protect against such cyber attacks, but most utility providers haven’t implemented NERC’s voluntary recommendations.

Stuxnet hit the (IT) newspaper front pages around September 2010, when Symantec announced the discovery. It represented one of the most advanced and sophisticated viruses ever found; one that targeted specific PLC devices in nuclear facilities in Iran:

Stuxnet is a threat that was primarily written to target an industrial control system or set of similar systems. Industrial control systems are used in gas pipelines and power plants. Its final goal is to reprogram industrial control systems (ICS) by modifying code on programmable logic controllers (PLCs) to make them work in a manner the attacker intended and to hide those changes from the operator of the equipment.

DatacenterKnowledge picked up on it in 2011, asking ‘is your datacenter ready for Stuxnet?’

After this article the datacenter industry didn’t seem to worry much about the subject. Most of us deemed the chance of being hacked by a highly sophisticated virus, attacking our specific PLCs or facility controls, very low.

Recently, security company Cylance published the results of a successful hack attempt on a BMS system located at a Google office building. This successful attempt shows a far greater threat to our datacenter control systems.

 

The road towards TCP/IP

Over the last few years, the world of BMS & SCADA systems has changed radically. The old (legacy) systems consisted of vendor-specific protocols, dedicated hardware and separate networks. Modern-day SCADA networks consist of normal PCs and servers that communicate through IT standard protocols like IP, and share networks with normal IT services.

IT standards have also invaded facility equipment: the modern-day UPS and CRAC unit is by default equipped with an onboard webserver, able to send warnings using another IT standard: SNMP.
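
As a small illustration of how IT-standard this has become, the sketch below polls a UPS for its battery status over SNMP, using the classic synchronous pysnmp API (4.x era). The hostname and community string are placeholders, and whether your UPS exposes the standard UPS-MIB (RFC 1628) depends on the vendor:

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Poll upsBatteryStatus (UPS-MIB, RFC 1628) from a hypothetical UPS.
# 'public' is the well-known default community string -- exactly the
# kind of left-enabled default discussed later in this post.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),              # SNMP v2c
        UdpTransportTarget(("ups.example.local", 161)),  # placeholder host
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.33.1.2.1.0")),  # upsBatteryStatus
    )
)

if error_indication:
    print(f"SNMP error: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")  # 2 = batteryNormal, 3 = batteryLow, ...
```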

The move towards IT standards and TCP/IP networks has provided us with many advantages:

  • Convenience: you are now able to manage your facility systems with your iPad or just a web browser. You can even enable remote access via the Internet for your maintenance provider. Just connect the system to your Internet service provider, network or Wi-Fi and you are all set. You don’t even need to get the IT guys involved…
  • Optimize: you are now able to do cross-system data collection so you can monitor and optimize your systems. Preferably in an integrated way, so you can have a bird’s-eye view of the status of your complete datacenter and automate the interaction between systems.

Many of us end-users have pushed the facility equipment vendors towards this IT-enabled world, and this has blurred the boundary between IT networks and BMS/SCADA networks.

In the past, the complexity of protocols like BACnet and Modbus, which tie everything together, scared most hackers away. We all relied on ‘security through obscurity’, but modern SCADA networks no longer provide this (false) sense of security.
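
To illustrate how little obscurity is left: here is a sketch using the pymodbus library (3.x API) to read holding registers from a Modbus/TCP device. The host and register addresses are made up, and Modbus itself has no authentication at all, so anyone who can reach TCP port 502 can do the same:

```python
from pymodbus.client import ModbusTcpClient

# Read ten holding registers from a hypothetical, unprotected Modbus/TCP
# device. Modbus has no built-in authentication: network reachability
# is the only barrier.
client = ModbusTcpClient("bms.example.local", port=502)  # placeholder host
client.connect()

result = client.read_holding_registers(address=0, count=10)
if not result.isError():
    print(result.registers)  # raw register values (setpoints, sensor data, ...)

client.close()
```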

Moving towards modern SCADA.

The transition towards modern SCADA networks and systems is approached in many different ways. Some vendors implemented embedded Linux systems on facility equipment. Others consolidated and connected legacy systems & networks through standard Windows or Linux servers acting as gateways.

This transition has not been easy for most BMS and SCADA vendors. A quick round among my datacenter peers provides the following stories:

  • BMS vendors installing old OS versions (Windows/Linux) because the BMS application doesn’t support the updated ones.
  • BMS vendors advising against OS updates (security, bug fix or end-of-support) because they will break their BMS application.
  • BMS vendors unable to provide details on which ports to enable on firewalls: ‘just open all ports and it will work’.
  • Facility equipment vendors without software update policies.
  • Facility equipment vendors without bug fix deployment mechanisms, having to update dozens of facility systems manually.

And these stories all apply to modern-day, currently used BMS & SCADA systems.

Vulnerability patching.

Older versions of the SNMP protocol have had several known vulnerabilities, affecting almost every platform that supported SNMP, including Windows/Linux/Unix/VMS.

It’s not uncommon to find these old SNMP implementations still operational in facility equipment. With the lack of software update policies that also cover the underlying (embedded) OS, new security vulnerabilities will be neglected by most vendors as well.

The OS implementations from most BMS vendors also aren’t hardened against cyber attacks: default ports are left open and default accounts are still enabled.
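
The sketch below shows how trivial the first reconnaissance step is: a plain TCP connect check against a few well-known ports on a (hypothetical) BMS host. Note that SNMP (UDP/161) and BACnet/IP (UDP/47808) would need a UDP probe instead, and you should only run this against equipment you own:

```python
import socket

# Check a few well-known TCP ports on a hypothetical BMS host.
# SNMP (UDP/161) and BACnet/IP (UDP/47808) are UDP and need a different probe.
TCP_PORTS = {
    23: "Telnet (often a leftover default)",
    80: "HTTP (embedded web server)",
    443: "HTTPS",
    502: "Modbus/TCP",
}

host = "bms.example.local"  # placeholder
for port, service in TCP_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        is_open = s.connect_ex((host, port)) == 0
        print(f"{port:>5} {service}: {'open' if is_open else 'closed/filtered'}")
```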

This is all great news for most hackers. It’s much easier for them to attack a standard OS like a Windows or Linux server: there are lots of tools available to make the hacker’s life easier, and he doesn’t have to learn complex protocols like Modbus or BACnet. This is by far the best attack surface in modern-day facility system environments.

The introduction of DCIM software will move us even further from legacy SCADA towards an integrated & IT-enabled datacenter facility world. You will definitely want to have your ‘bird’s-eye DCIM view’ of your datacenter anywhere you go, so it will need to be accessible and connected. All DCIM solutions run on mainstream OSs, and most of them come with IT industry standard databases. Those configurations provide another excellent attack surface if not managed properly.

ISO 27001

Some might say: ‘I’m fully covered because I have an ISO 27001 certificate’.

The scope of an ISO 27001 audit and certificate is set by the organization pursuing the certification. For most datacenter facilities, the scope is limited to physical security (like access control and CCTV) and its processes and procedures. IT systems and IT security measures are excluded because those are part of the IT domain, not facilities. So don’t assume that BMS and SCADA systems are included in most ISO 27001 certified datacenter installations.

Natural evolution

Most of the security and management issues are a normal part of the transition into a larger-scale, connected IT world for facility systems.

The same lack of awareness around security, patching, managing and hardening of systems was seen in the IT industry 10-15 years ago. The move from a central mainframe world to decentralized servers and networks, combined with the introduction of the Internet, forced IT administrators to focus on managing the security of their systems.

In the past I have heard facility departments complain that the IT guys should involve them more, because IT didn’t understand power and cooling. With the introduction of a more software-enabled datacenter, the facility guys now need to do the same and get IT more involved; IT has dealt with all of this before…

Examples of what to do:

  • Separate your systems and divide the network. Your facility system should not share its network with other (office) IT services. The separate networks can be connected using firewalls or other gateways to enable information exchange.
  • Assess your real needs: not everything needs to be connected to the Internet. If facility systems can’t be hardened by the vendor or your own IT department, then don’t connect them to the Internet. Use firewalls and Intrusion Detection Systems (IDS) to secure your system if you do connect them to the Internet.
  • Involve your IT security staff. Have facilities and IT work together on implementing and maintaining your BMS/SCADA/DCIM systems.
  • Create awareness by urging your facility equipment vendor or DCIM vendor to provide a software update & security policy.
  • Include the facility-systems in the ISO 27001 scope for policies and certification.
  • Make arrangements with your BMS and/or DCIM vendor about the management of the underlying OS. Preferably this is handled by your internal IT guys, who should already know everything about patching and hardening IT systems. If the vendor provides you with an appliance, then the vendor needs to manage the patching process and hardening of the system.

If you would like to talk about the future of securing datacenter BMS/SCADA/DCIM systems, then join me at Observe Hack Make (OHM) 2013. OHM is a five-day outdoor international camping festival for hackers and makers, and those with an inquisitive mind. It starts July 31st 2013.

Note:
There are really good whitepapers on IDS systems (and firewalls) for securing the Modbus and BACnet protocols, if you do need to connect those networks to the Internet. Example: Snort IDS for SCADA (pdf) or books about SCADA & security at Amazon.

Source:
A large part of this blog is based on a Dutch article on BMS/SCADA security from January 2012 by Jan Wiersma & Jeroen Aijtink (CISSP). The Dutch IT Security Association (PvIB) nominated this article for ‘best security article of 2012’.


Where is the open datacenter facility API?

<English cross post with my DCP blog>

For some time, the Datacenter Pulse top 10 has featured an item called ‘Converged Infrastructure Intelligence’. The 2012 presentation mentioned:

Treat the DC infrastructure as an IT system;
– Converge the infrastructure instrumentation and control systems
– Connect it into the IT systems for ultimate control
– Standardize connections and protocols to connect components

With datacenter infrastructure becoming a more complex system, and with the push for better efficiency across the whole datacenter stack, the need arises to integrate the layers of the stack and make them ‘talk’ to each other.

This is shown in the DCP Stack framework with the need for ‘integrated control systems’; going up from the (facility) real-estate layer to the (IT) platform layer.

So if we have the ‘integrated control systems’, what would we be able to do?

We could:

  • Influence behavior (you can’t control what you don’t know): application developers can be given insight into their power usage when they write code, for example. This is one of the steps needed towards more energy-efficient application programming. It will also provide more insight into the complete energy flow and more detailed measurements.
  • Design for lower TIER level datacenters: when failure is imminent, IT systems can be triggered to move workloads to other datacenter locations, based on signals from the facility equipment to the IT systems.
  • Design close control cooling systems that trigger on real CPU and memory temperature and not on room-level temperature sensors (see the sketch after this list). This could eliminate hot spots and focus the cooling energy consumption on the spots where it is really needed. It could even make the cooling system aware of an oncoming throttle-up of IT systems.
  • Optimize datacenters for the smart grid. The increase of sustainable power sources like wind and solar energy increases the need for more flexibility in energy consumption. Some may think this is only the case when you introduce onsite sustainable power generation, but the energy market will be affected by the general availability of sustainable power sources as well. In the end, the ability to be flexible will lead to lower energy prices. Real supply and demand management in the datacenter requires integrated information and control across the facility and IT layers of the stack.
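
As a small sketch of the close-control cooling idea in the list above: on Linux, psutil can read the CPU temperature sensors, and the reading could be pushed to a purely hypothetical cooling feedback endpoint. The URL, rack name and JSON fields are all made up:

```python
import psutil
import requests  # used here to push to a made-up BMS endpoint

def hottest_cpu_temp_c() -> float:
    """Return the hottest temperature reported by any sensor (Linux)."""
    temps = psutil.sensors_temperatures()
    readings = [t.current for sensors in temps.values() for t in sensors]
    return max(readings) if readings else 0.0

temp_c = hottest_cpu_temp_c()
print(f"Hottest sensor: {temp_c:.1f} C")

# Hypothetical integration point: tell the close-control cooling system
# which rack is running hot, so cooling follows the real IT load.
requests.post(
    "https://bms.example.local/api/v1/cooling/feedback",  # made-up endpoint
    json={"rack": "A12", "cpu_temp_c": temp_c},
    timeout=5,
)
```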

The gap between IT and facilities does not only exist between IT and facility staff, but also between their information systems. Closing the gap between people and systems will make the datacenter more efficient and more reliable, and it opens up a whole new world of possibilities.

This all leads to something that has been on my wish list for a long, long time: the datacenter facility API (Application Programming Interface).

I’m aware that we have BMS systems supporting open protocols like BACnet, LonWorks and Modbus, and that is great. But they are not ‘IT ready’. I know some BMS systems support integration using XML and SOAP, but that is not based on a generic ‘open standard framework’ for datacenter facilities.

So what does this API need to be?

First, it needs to be an ‘open standard’ framework: publicly available, with no rights restrictions on the usage of the API framework.

This will avoid vendor lock-in. History has shown us, especially in the area of SCADA and BMS systems, that our vendors come up with many great new proprietary technologies. While I understand that the development of new technology takes time and a great deal of money, locking me into your specific system is not acceptable anymore.

A vendor-proprietary system in the co-lo and wholesale facility will lead to the lock-in of co-lo customers. This is great for the co-lo datacenter owner, but not for its customers. Datacenter owners, operators and users need to be able to move between facilities and systems.

Every vendor that uses the API framework needs to use the same routines, data structures and object classes. Standardized. And yes, I used the word ‘standardized’. So it’s a framework we all need to agree upon.

These two requirements are the big difference between what is already available and what we actually need. It should not matter if you place your IT systems in your own datacenter or with co-lo provider X, Y or Z. The API will provide the same information structure and layout anywhere…

(While it would be good to have the BMS market disrupted by open source development, having an open standard does not mean all the surrounding software needs to be open source. Open standard does not equal open source and vice versa.)

It needs to be IT ready. An IT application developer needs to be able to talk to the API just like he would talk to any other IT application API; so no strange facility protocols. Talk IP. Talk SOAP, or better: REST. Talk something that is easy to understand and implement for the modern-day application developer.
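
To show what ‘IT ready’ could feel like in practice, here is a sketch of a developer pulling rack telemetry the way they would call any other REST API. The base URL, path and auth scheme are hypothetical, since no such standard exists yet:

```python
import requests

# Hypothetical facility API call: no BACnet or Modbus knowledge required,
# just HTTP + JSON like any other modern API. Endpoint and token are made up.
BASE = "https://facility.example.com/api/v1"

resp = requests.get(
    f"{BASE}/racks/A12/telemetry",
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```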

All this openness and ease of use may be scary for vendors and even end users, because many SCADA and BMS systems are famous for relying on ‘security through obscurity’. All the facility-specific protocols are notoriously hard to understand and program against. So if you don’t want to lose this false sense of security as a vendor: give us a ‘read-only’ API. I would be very happy with just this first step…

So what information should this API be able to feed?

Most information would be nice to have in near real time:

  • Temperature at rack level
  • Temperature outside of the building
  • kWh at rack level; other energy-related metrics would be nice too
  • Warnings / alarms at rack and facility level
  • kWh price (this can be pulled from the energy market, but that doesn’t include the full datacenter kWh price (like a PUE markup))

(all if and where applicable and available)
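
Purely as a strawman, the wish list above could translate into a payload along these lines; every field name and unit below is invented to show the shape, not to propose a standard:

```python
# Invented example payload for the wish list above (not a proposed standard).
telemetry = {
    "rack": "A12",
    "temperature_c": 24.3,              # temperature at rack level
    "outside_temperature_c": 11.8,      # temperature outside the building
    "energy_kwh_last_hour": 3.2,        # rack-level energy use
    "alarms": [
        {"level": "warning", "scope": "rack", "message": "inlet temp above setpoint"},
    ],
    "kwh_price_market": 0.23,           # pulled from the energy market
    "kwh_price_effective": 0.23 * 1.5,  # market price with a PUE markup (PUE 1.5)
}
print(telemetry)
```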

The information owner would need features like access control for rack-level information exchange, and the ability to tweak the real-time features; we don’t want to create unmanageable information streams in terms of security, volume and frequency.

So what do you think the API should look like? What information exchange should it provide? And more importantly: who should lead the effort to create the framework? Or… do you believe the Physical Datacenter API framework is already here?

More:

Good API design by Google: http://www.youtube.com/watch?v=heh4OeB9A-c&feature=gv
