Sunday, December 31, 2017

Alfidi Capital at Cloud and IoT Expo 2017

There is an endless parade of cloud computing action in Silicon Valley. I attended Sys-Con's Cloud Expo and IoT Expo 2017 down in Santa Clara to check out the latest developments. They also had some Big Data and DevOps tracks going on at this show. It's easy to get all of these cloud conferences mixed up if we don't identify the primary organizers. It's even easier to understand cloud action once you read my Alfidi Capital blog articles. I do in fact have a badge selfie, as you can see below.

Alfidi Capital at Cloud and IoT Expo 2017.

The major cloud providers all publish global maps of their service availability zones. Their data center locations may be proprietary information, so I will let interested readers peruse those maps on their own. Cloud users should know how a cloud provider will support their data classification and workload types. AI processing of enormous data volumes will drive cloud growth, and so will blockchain adoption. "Serverless" is a new cloud buzzword often appearing with "containerization" in a buzzword combo.

One speaker took the stage in an inflatable dinosaur costume to tell us about microservices. It was a cute stunt. Microservices decompose an application's scope into single-function modules, reducing a system's complexity by minimizing the coordination required between the teams that own each module. Expect the "microservices" buzzword to join "containers" in heavy rotation because it sounds so cool. Kubernetes fans can use the Istio open source framework along with whatever toolchain they need; someone will likely be impressed.

All of this talk about microservices made me ponder how I would use them in a real project. API governance is a new challenge for microservices frameworks. A bigger challenge is to model workflows that cross different microservice architectures. I searched the Web for examples of these models; there are quite a few. Code Project may have something useful. The Gartner Hype Cycle for Emerging Technologies 2017 places machine learning (ML) very close to the Peak of Inflated Expectations. I need an open source ML process or AI engine I can use to improve an app or bot, provided I can control the input of proprietary data. The process needs to handle classification, clustering, and regression without problems. Alternatively, I could use open source training data sets similar to the business data I need to process. Perhaps Uncle Sam's Data.gov has both training and real data that would suit my analytical goals.
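
A minimal sketch of the kind of open source ML process I have in mind, using scikit-learn and its bundled toy datasets; proprietary business data (or a Data.gov download) would replace the toy data in a real project.

```python
# Minimal sketch of an open source ML workflow covering classification,
# clustering, and regression with scikit-learn's bundled toy datasets.
# Proprietary business data (or a Data.gov download) would replace them.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Classification: predict a category from labeled training data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Clustering: find natural groupings with no labels at all.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [list(kmeans.labels_).count(i) for i in range(3)])

# Regression: predict a continuous value.
Xd, yd = load_diabetes(return_X_y=True)
reg = LinearRegression().fit(Xd, yd)
print("regression R^2:", reg.score(Xd, yd))
```

Each step uses a deliberately simple model; swapping in fancier estimators is easy once a pipeline exists for feeding in controlled data.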

Ask yourselves what "serverless" truly means, given the chronic underutilization of on-premise data centers. Serverless is not just another cloud function. It should remove developers from routine infrastructure chores and give them true DevOps freedom. Seek the cloud's wisdom if you speak the correct language. I will not take a serverless expert seriously if they think an ICO's "digital credit" is a virtual asset the cloud can leverage.
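
To make that concrete, here is a minimal sketch in the style of a function-as-a-service handler. The event fields, tax rate, and handler signature are my assumptions, since each provider defines its own conventions; the point is that the developer ships only the function while the provider owns the servers, scaling, and patching underneath.

```python
import json

# Sketch of a function-as-a-service handler, the kind of artifact a developer
# ships under a serverless model. The event fields ("customer_id", "amount"),
# the tax rate, and the handler signature are all hypothetical assumptions;
# each cloud provider defines its own conventions.
def handler(event, context):
    order = json.loads(event["body"])
    total = order["amount"] * 1.0875  # hypothetical sales tax rate
    return {
        "statusCode": 200,
        "body": json.dumps({"customer_id": order["customer_id"],
                            "total": round(total, 2)}),
    }

if __name__ == "__main__":
    # Local smoke test with a fake event; in production the platform invokes
    # handler() and the developer never touches a server.
    fake_event = {"body": json.dumps({"customer_id": "c-123", "amount": 40.0})}
    print(handler(fake_event, context=None))
```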

Cloud operators are talking more about the EU's GDPR and the need to do a gap analysis to become compliant. It's a healthy development for anyone who respects data privacy. Outsourcing GDPR compliance services is now a growing cottage industry in the cloud sector. Operators also need cool metrics like "idea to cash" and mean time to repair (MTTR) to impress their financial auditors.
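
For anyone who wants the arithmetic behind one of those metrics: MTTR is just total repair time divided by incident count. A quick sketch with hypothetical incident durations:

```python
# Mean time to repair (MTTR) = total repair time / number of incidents.
# The incident durations below are hypothetical, in hours.
repair_hours = [3.0, 0.5, 4.0, 1.5]
mttr = sum(repair_hours) / len(repair_hours)
print(f"MTTR: {mttr:.2f} hours")  # MTTR: 2.25 hours
```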

The cloud cannot exist separately from the real-world IoT systems it will manage. Manufacturing plants are info-synched to adjust operations in real time, so the physical plant must be designed to incorporate software and sensors for data capture. I foresee massive security vulnerabilities if manufacturers allow their supply chain vendors to have real-time visibility into their live factory operations. Such access gives hackers and malware an easy path into an entire vertical. The cloud can make ordinary ICS/SCADA vulnerabilities even worse.

Cloud success increasingly means using AI. Imagine an AI governing a business rule management system (BRMS) with the ability to adjust its principal rules. A human designer must be in the loop to ensure the AI does not get out of control. If the Singularity happens, the most likely origin will be some AI governing a major cloud provider that has access to the AIs of enterprise clients running their own BRMS through public clouds. The self-aware computer apocalypse is the worst-case scenario of AIs leveraging other AIs. The Standard Performance Evaluation Corporation (SPEC) should codify standards that will prevent this AI nightmare.

Choosing a cloud provider has strategic implications for a business. All configurations (data center, container, serverless) lead to "vendor lock-in," where a customer is effectively tied to one cloud provider because leaving is costly. This is the exact definition of a switching cost in a sector where the biggest players have a durable competitive advantage. It's why data centers are becoming just like railroads and pipelines. Cloud vendor lock-in is a switching cost for the customer and a competitive advantage for the provider. Spell my name correctly when you quote me on that point.

DevOps people belong in the cloud. Read the Puppet and DORA State of DevOps Report 2017 for assessments of where DevOps is going. There are also plenty of DevOps handbooks and white papers on the Web for additional guidance. I know how to solve the tech culture problem of which developer cult is best for the cloud sector. Design a "Project X" and have different teams (DevOps, agile/lean, waterfall) work to solve it on a fixed budget. Ready, set, go. May the best team win. There will always be some expert who thinks forcing teams to compete is bad, and who can cite research on development teams to support that conclusion. My point is that they don't need to compete internally, so the competition should be confined to conferences and hackathons where teams can demonstrate their skills.

Containerization is the future of the cloud. Composable infrastructures should theoretically lead to the elimination of data junkyards. We will see how fast this elimination occurs if the cloud continues to trend away from virtualization and towards containerization. Latency is the single most important business criterion determining the resources directed to containerization. Power management is the single most important limiting factor in data center optimization.

The app development ecosystem has diversified massively and now depends on assembling objects from multiple open source code bases. Optimizing apps with AIs will be a big thing. Open source APIs must be flexible enough to accommodate microservices designed for either containers or virtual machines. Each microservice must perform one function only, and it must communicate with other functions only via well-developed APIs; otherwise the microservice will not work well with containers.
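
Here is a toy sketch of that one-function rule using only the Python standard library; the service's job (a price-quote lookup) and its catalog data are hypothetical. It does exactly one thing and exposes it only through an HTTP API, which is what lets a container run it in isolation.

```python
# Toy single-function microservice using only the Python standard library.
# The service's one job (a price-quote lookup) and its catalog data are
# hypothetical. It exposes that job only through an HTTP API, which is what
# lets a container run it in isolation from every other service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"widget": 9.99, "gadget": 19.99}  # hypothetical catalog data

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer paths like /quote/widget; everything else is a 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "quote" and parts[1] in PRICES:
            body = json.dumps({"item": parts[1], "price": PRICES[parts[1]]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Other microservices call this API over the network; none of them share
    # this service's code or data store.
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
```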

Plan out cloud use thoroughly. Assess the opportunity cost of not using the cloud for a given project; that cost avoidance is the justification for cloud adoption. Any business decision in a large enterprise that's not justified with data will come down to politics. Design thinking is useful in enterprise architecture, with every potential choice having its own cost estimate.
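
A back-of-the-envelope sketch of that assessment; every figure below is a hypothetical placeholder for an enterprise's own estimates.

```python
# Back-of-the-envelope opportunity cost of staying on premise. Every figure
# is a hypothetical placeholder for an enterprise's own estimates.
onprem_annual = 500_000  # hardware, power, and staff for an owned data center
cloud_annual = 320_000   # projected pay-as-you-go spend for the same workload
cost_avoidance = onprem_annual - cloud_annual
print(f"Annual cost avoidance from cloud adoption: ${cost_avoidance:,}")
# Annual cost avoidance from cloud adoption: $180,000
```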

Scrum is one variation of the Agile Manifesto. Making agile work in verticals with declining economics means firing a lot of senior people who stand in the way of a pivot that will keep the organization alive. The dinosaurs won't take risks. The old 80/20 rule applies, so the worst 20% of employees just won't grok agile. Once they find less demanding jobs at lower pay, they may have an epiphany that they need to acquire new skills and attitudes. The cloud will disintermediate servers from their users.

I am very impressed with emerging attempts to determine the economic value of data (EVD). The accounting approach to EVD is wrong because accounting uses historical cost only. Economics uses a future value for EVD because data value persists into the future as its multiplier effect cascades throughout a network. I like a quote I heard at the conference from a data science practitioner working on EVD: "Data is the new sun, not the new oil." Petroleum is a depleting asset. The sun never wears out, just as data persists. That one metaphor made my entire conference attendance worthwhile. Upon reflection, I will add that EVD models must account for incorrect or obsolete data that must be eliminated because it no longer adds value. Data's persistence does not make it immune from depreciation.
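
To sketch how an EVD model might handle both persistence and depreciation, here is a toy future-value calculation where a data asset's annual value decays as records go stale; the starting value, decay rate, and discount rate are all hypothetical assumptions.

```python
# Toy economic value of data (EVD) model. Value persists into future periods
# (the economics view) but depreciates as records become incorrect or
# obsolete. Starting value, decay rate, and discount rate are hypothetical.
initial_annual_value = 100_000  # value the data generates in year one
decay_rate = 0.15               # share of records going stale each year
discount_rate = 0.08            # time value of money
years = 5

evd = sum(
    initial_annual_value * (1 - decay_rate) ** t / (1 + discount_rate) ** t
    for t in range(years)
)
print(f"Present value of the data asset over {years} years: ${evd:,.0f}")
```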

Edge computing makes IoT smarter. The MQTT protocol is multi-cloud connective tissue where more than one cloud overlaps with remote devices. Edge analytics turns BI into behavioral understanding; expect it to use data lake dumping (heads up, Hadoop fans). Common cloud architecture uses MQTT to push data from analytics engines (assigned to collect from IoT device categories) into the cloud. The whole point of edge computing is to reduce the volume of data going to the cloud. It economizes on traffic by sending only analytics (a compression of data) or patterns (even further compression).
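
A minimal sketch of that economizing pattern with the open source paho-mqtt client; the broker hostname, topic, and sensor values are hypothetical. The edge node reduces many raw sensor readings to one compact summary and publishes only the summary upstream.

```python
# Edge node sketch: reduce raw IoT readings locally, then publish only a
# compact summary upstream over MQTT using the open source paho-mqtt client
# (pip install paho-mqtt; this is the 1.x API current in 2017). The broker
# hostname, topic, and sensor values are hypothetical.
import json
import statistics
import paho.mqtt.publish as publish

raw_readings = [21.4, 21.6, 21.5, 35.0, 21.5]  # hypothetical sensor samples

# Edge analytics: many raw samples become one small pattern report.
summary = {
    "mean": round(statistics.mean(raw_readings), 2),
    "max": max(raw_readings),
    "anomalies": sum(1 for r in raw_readings if r > 30.0),  # crude threshold
}

# Only the summary crosses the network; the raw samples stay at the edge.
publish.single("factory/line1/summary", json.dumps(summary),
               hostname="broker.example.com")
```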

I covered a lot of intellectual ground at this Cloud / IoT Expo. The tech expertise routinely concentrated into the Santa Clara Convention Center is one of the wonders of Silicon Valley. The conference gave me even more inspiration for some tech ideas that I really need to execute. I may even showcase my concepts at next year's Cloud / IoT Expo. The sky is truly the limit in the cloud.