Saturday, October 25, 2008

Amazon cloud computing moves fast

Right now, I have my own (virtual) server running in the Amazon data center. Getting such an Amazon server up and running has become really easy. With Elasticfox, a plug-in for Firefox, everything can be configured in a trivial and user-friendly way. There is no more need to use command line tools or to write the web service calls yourself; just follow the Getting Started Guide.
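
For comparison, this is roughly what a hand-rolled call to the EC2 Query API looks like. A minimal sketch in Java, with placeholder credentials and an approximate URL encoding (the exact canonicalisation rules are in the AWS documentation), so consider it an illustration rather than a working client:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.net.URLEncoder;
    import java.text.SimpleDateFormat;
    import java.util.*;

    // Sketch of a hand-rolled EC2 Query API request (Signature Version 2).
    // The keys are placeholders and the URL encoding is approximate; see the
    // AWS documentation for the exact canonicalisation rules.
    public class Ec2QuerySketch {
        public static void main(String[] args) throws Exception {
            String host = "ec2.amazonaws.com";
            SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            iso.setTimeZone(TimeZone.getTimeZone("UTC"));

            // Parameters must be sorted by name before signing; a TreeMap does that.
            SortedMap<String, String> params = new TreeMap<String, String>();
            params.put("Action", "DescribeInstances");
            params.put("AWSAccessKeyId", "YOUR-ACCESS-KEY-ID");
            params.put("SignatureMethod", "HmacSHA256");
            params.put("SignatureVersion", "2");
            params.put("Timestamp", iso.format(new Date()));
            params.put("Version", "2008-08-08");   // placeholder: use the current API version

            StringBuilder query = new StringBuilder();
            for (Map.Entry<String, String> e : params.entrySet()) {
                if (query.length() > 0) query.append('&');
                query.append(e.getKey()).append('=').append(URLEncoder.encode(e.getValue(), "UTF-8"));
            }

            // String to sign: HTTP verb, host, path and the canonical query string.
            String toSign = "GET\n" + host + "\n/\n" + query;
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec("YOUR-SECRET-KEY".getBytes("UTF-8"), "HmacSHA256"));
            String signature = Base64.getEncoder().encodeToString(mac.doFinal(toSign.getBytes("UTF-8")));

            // The signed request is just a URL that can be fetched with any HTTP client.
            System.out.println("https://" + host + "/?" + query
                    + "&Signature=" + URLEncoder.encode(signature, "UTF-8"));
        }
    }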

In July of this year, I read the book "Programming Amazon Web Services" by James Murty. A great book, with lots of Ruby code explaining how to invoke the low-level Amazon web services. The book was published in February 2008 and was already a bit outdated when I read it this summer, and it keeps falling further behind, because Amazon is adding new features at such a rapid pace:
  • Public IP addresses (Elastic IP addresses); earlier you needed a machine elsewhere with a fixed IP address to redirect clients to the server at Amazon (e.g. via an HTTP 302 redirect)
  • A local, persistent file system (Elastic Block Store); earlier you had to work around this with S3
  • Lower prices
  • Windows support; before, only *nix distributions were available
  • Database support, with Oracle on Linux and now SQL Server on Windows
  • No longer beta, but full production status with an SLA
  • The Elasticfox plug-in, along with good documentation
So now I have my own simple Windows 2003 server with a fixed IP address and DNS name. Accessing the server with Remote Desktop works fine. The responsiveness is not always great, but comparable to a local VMware instance. By the way, this makes it a perfect alternative to, and a serious competitor of, VMware! I have the smallest server instance running, which is obviously virtualised at Amazon, but it looks like a dual-core Opteron with 1.66 GB of memory. And the bandwidth is phenomenal: I downloaded Acrobat at more than 8 MByte/s.

Amazon is already announcing future features such as load balancing, monitoring and automatic scaling (automatically launching extra server instances). It is strange that charging is still done via credit card only, but I assume that big users can get a real invoice with payment terms.

Extra remarks:
  • On Friday Dec. 12, the Amazon evangelist Simone Brunozzi will give a talk at the Devoxx conference.
  • Running the server instance for a couple of hours cost 70 dollar cents, mostly because I left the Elastic IP address allocated but unused for a while

Sunday, October 12, 2008

Simple messaging protocols

Messaging systems such as IBM's WebSphere MQ use proprietary messaging protocols, so a library is always needed on the client side to speak the proprietary language to the messaging server. If no such library is available for your programming language, you're out of luck. As for standard APIs, JMS seems to be the only one ever defined.

If a messaging server exposes a simple protocol over HTTP, it becomes possible to talk to the messaging server from any programming language. ActiveMQ is a good example in that area with its STOMP protocol. IBM has the "MQ Bridge for HTTP". And OpenMQ 4.3 now has the UMS protocol.
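
As an illustration of how simple such a protocol can be, here is a minimal sketch that sends one message over STOMP from plain Java. It assumes a broker such as ActiveMQ listening on its default STOMP port 61613; the queue name is made up and error handling is left out:

    import java.io.*;
    import java.net.Socket;

    // Minimal STOMP 1.0 client sketch: connect, send one message, disconnect.
    // Assumes a broker such as ActiveMQ listening on the default STOMP port 61613.
    public class StompSendSketch {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("localhost", 61613);
            OutputStream out = socket.getOutputStream();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "UTF-8"));

            // Every STOMP frame is plain text terminated by a NUL byte.
            writeFrame(out, "CONNECT\n\n");
            readFrame(in);   // expect a CONNECTED frame back

            writeFrame(out, "SEND\ndestination:/queue/test\n\nhello from plain TCP");
            writeFrame(out, "DISCONNECT\n\n");
            socket.close();
        }

        private static void writeFrame(OutputStream out, String frame) throws IOException {
            out.write(frame.getBytes("UTF-8"));
            out.write(0);            // the NUL byte terminates the frame
            out.flush();
        }

        private static void readFrame(BufferedReader in) throws IOException {
            StringBuilder frame = new StringBuilder();
            int c;
            while ((c = in.read()) > 0) {   // read until the NUL terminator
                frame.append((char) c);
            }
            System.out.println(frame);
        }
    }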

From quickly skimming the REST versions of these protocols - OpenMQ's UMS in particular - they are not "pure REST" but rather a "REST-RPC hybrid" (cf. the great book "RESTful Web Services"): 1) HTTP POST is used instead of GET, PUT or DELETE, 2) the actual action is passed as a URL parameter, and 3) the interactions become stateful through a logon service request.

When I think about a "pure REST" approach to messaging, I expect to see URLs such as http://mq.my-org.be/.../domain/queue. Sending a message becomes an HTTP PUT. Peeking at a message is an HTTP GET. And receiving a message should become an HTTP GET followed by an HTTP DELETE (receiving a message is normally "destructive" in the messaging world). How to avoid concurrency issues in this receive scenario, with multiple clients receiving the same message, is a REST concurrency question that I gladly pass on ;-)
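
A sketch of what the client side of such a hypothetical pure-REST queue could look like. The URL and the "/first" convention for addressing the head of the queue are made up, since no broker exposes exactly this interface; it merely illustrates the verb mapping described above:

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch of the verb mapping for a hypothetical RESTful queue.
    // The URL and the /first convention are imaginary.
    public class RestQueueSketch {

        private static final String QUEUE = "http://mq.my-org.be/domain/queue";

        // Send: PUT the message body to the queue resource.
        static void send(String message) throws IOException {
            HttpURLConnection con = (HttpURLConnection) new URL(QUEUE).openConnection();
            con.setRequestMethod("PUT");
            con.setDoOutput(true);
            OutputStream body = con.getOutputStream();
            body.write(message.getBytes("UTF-8"));
            body.close();
            System.out.println("send -> HTTP " + con.getResponseCode());
        }

        // Peek: GET the first message without removing it.
        static String peek() throws IOException {
            HttpURLConnection con = (HttpURLConnection) new URL(QUEUE + "/first").openConnection();
            BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream(), "UTF-8"));
            return in.readLine();
        }

        // Receive: GET followed by DELETE, i.e. a destructive read in two steps.
        static String receive() throws IOException {
            String message = peek();
            HttpURLConnection del = (HttpURLConnection) new URL(QUEUE + "/first").openConnection();
            del.setRequestMethod("DELETE");
            del.getResponseCode();   // another client may have deleted it first: the concurrency question
            return message;
        }

        public static void main(String[] args) throws IOException {
            send("hello queue");
            System.out.println("received: " + receive());
        }
    }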

Closing remarks:
  • Another approach is taken by AMQP: this initiative standardizes a binary protocol between client and messaging server. Any AMQP client library in whatever programming language should be able to communicate with any AMQP compliant server. Adoption of AMQP is rather limited.
  • Existing messaging products (WebSphere MQ, SonicMQ, ...) can tunnel their protocol over HTTP(S), but that still requires the use of their respective client libraries.
  • Most JMS messaging solutions support .Net (and optionally COM).
  • At Devoxx 2008, Linda Schneider will talk about "Connectivity with OpenMQ" and Bruce Snyder will give a university session about "ActiveMQ and ServiceMix".

Wednesday, October 8, 2008

TCP/IP vulnerability?

Security Now is a great podcast about all sorts of security topics. Episode 164 is about "Sockstress". There seems to be a serious problem in almost every TCP/IP stack, including those of routers! Steve Gibson (the security expert behind the podcast) based his discussion on a Dutch podcast called "De beveiligingsupdate" ("The security update").

I have some understanding of networking, but I am no specialist; as I understand it, this attack is launched after the TCP 3-way handshake is done. After such a handshake, a reasonable amount of trust has been established, as the server knows the IP address of the client, and it implicitly assumes that the client will behave according to the TCP/IP rules.

So this attack only starts after the TCP/IP connection has been established. First of all, the client reduces its own resource consumption by encoding information about the connection in the sequence numbers in the packet headers; that way it needn't keep any state. Secondly, the client doesn't use the TCP/IP stack of the client machine itself, but has its own implementation in user space, based on raw sockets. And then it starts playing dirty tricks, e.g. responding to the server that it doesn't have any buffer space left. The server will wait a certain amount of time and then try to resume sending. By forcing the server to manage this large set of connections, with all the associated resource consumption - memory and timers - the TCP/IP service is brought to its knees. And potentially the complete OS crashes! The problem and the corresponding attack have apparently been known for 3 years, but are only now coming out in the open.
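
As an illustration of the first trick, encoding connection information in the sequence number (the same idea as SYN cookies, just applied on the client side): a sequence number can be derived from a keyed hash of the connection's addresses, so nothing has to be stored per connection. The sketch below is purely illustrative and of course not the actual Sockstress code:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Illustration of keeping no per-connection state: the initial sequence number
    // is derived from a keyed hash of the connection's addressing info, so when a
    // packet comes back the sender can recompute the value instead of storing it.
    // Same idea as SYN cookies; NOT the actual Sockstress tool.
    public class StatelessSeqSketch {

        private static final byte[] SECRET = "local-secret-key".getBytes();

        static int initialSequenceNumber(String srcIp, int srcPort,
                                         String dstIp, int dstPort) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            byte[] digest = mac.doFinal((srcIp + ":" + srcPort + ">" + dstIp + ":" + dstPort).getBytes());
            // Fold the first four bytes of the hash into a 32-bit sequence number.
            return ((digest[0] & 0xff) << 24) | ((digest[1] & 0xff) << 16)
                 | ((digest[2] & 0xff) << 8)  |  (digest[3] & 0xff);
        }

        // When an ACK arrives, recompute the expected value instead of looking it up.
        static boolean belongsToUs(int ackedSeq, String srcIp, int srcPort,
                                   String dstIp, int dstPort) throws Exception {
            return ackedSeq == initialSequenceNumber(srcIp, srcPort, dstIp, dstPort) + 1;
        }
    }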

Anyway, this is the way I understood it. After the DNS poisoning issue, this seems like another very fundamental attack. If the story is true, and no countermeasures are found, this might become a major issue: not only a crisis in the financial world, but also a crisis in Internet land.

Note: there is a related Dutch podcast called "Ict roddels" ("ICT gossip"), recommended for native Dutch speakers.

Sunday, October 5, 2008

Password renewal in adapters

ESBs use adapters to connect to all sorts of systems: back-end applications, databases, queueing systems, (S)FTP(S) servers, Web Services, HTTP(S) servers or B2B counterparts. The ESB usually uses a technical user account to connect to these systems, unless the real identity of a human user is carried along to the back-end systems (identity propagation).

Larger organizations enforce password change policies, but changing the password with which such a technical user connects to one of these systems is a tough task: the password change in the target system and in the ESB needs to happen at the same time. And to avoid any problems or disturbance to the business, this usually means late at night or in the middle of the weekend (when the system goes down for scheduled maintenance).

It would be nice if adapters provided support for such password changes. One option would be to pre-configure the new password together with the date and time from which it should be applied. Another alternative is to configure 2 or 3 passwords: if the 'current' password doesn't work, try the other (newer) ones.
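
A minimal sketch of that second idea, with made-up interface names since no existing adapter exposes exactly this: the adapter holds a short ordered list of candidate passwords and, when authentication fails, simply retries with the next (newer) one:

    import java.util.Arrays;
    import java.util.List;

    // Sketch of an adapter credential helper that holds the current password plus
    // one or two pre-configured "future" passwords. Interface names are made up.
    public class RotatingCredentials {

        private final String user;
        private final List<String> candidatePasswords;   // current first, newer ones after it

        public RotatingCredentials(String user, String... passwords) {
            this.user = user;
            this.candidatePasswords = Arrays.asList(passwords);
        }

        // Tries each candidate in order; the target system decides which one is valid today.
        public Connection connect(TargetSystem target) {
            for (String password : candidatePasswords) {
                try {
                    return target.logon(user, password);
                } catch (AuthenticationFailed retryWithNext) {
                    // Expected around a password change window: fall through to the newer password.
                }
            }
            throw new IllegalStateException("None of the configured passwords were accepted for " + user);
        }

        // Hypothetical collaborators, just enough to make the sketch compile.
        public interface TargetSystem { Connection logon(String user, String password) throws AuthenticationFailed; }
        public interface Connection { }
        public static class AuthenticationFailed extends Exception { }
    }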

PS: a similar problem is the changeover of encryption keys.

Saturday, October 4, 2008

Oracle in the cloud

While stuck in Belgian traffic jams, I listen to a lot of podcasts. One of them is the "Oracle Technology Network Techcasts". One of the latest episodes - recorded at Oracle OpenWorld - was about Oracle and "the cloud". Interesting to learn that Oracle products will become available on Amazon's cloud computing infrastructure: Oracle will officially support deployments of its database on EC2, and it also makes pre-configured Amazon Machine Images (AMIs) available containing the Oracle database.

But more interesting to me was the announcement that Oracle is also making its Fusion Middleware available in the cloud. That should mean it becomes possible to run Oracle's SOA Suite, the Oracle BPEL engine or the Oracle B2B server in the cloud!

When checking the list of AMIs that Oracle makes available, there is no Fusion Middleware yet. I'm looking forward to more detailed information about Oracle middleware in the cloud.

Final note: besides Linux, Amazon will also start providing Windows images (virtual machines)