In response to a scalability question, here is the moderately long answer:
Introduction
First, let me state that I don't believe in the word "enterprise", because it has traditionally been abused in the IT world to mean all sorts of mostly negative things, like "enterprise" as in "will cost you a fortune", "enterprise" as in "absurd and complicated", etc.
Apart from that, we can stick to the literal meaning of an "enterprise", which may have 5 aspects:
- a large number of users
- a large number of requests and/or data traffic
- a large database, in millions of records
- an extensive set of operations, procedures, workflows
- a strict set of protocols (of deployment, security, manageability) and SOPs
1. Number of users
Here, no truly open source software really suffers, because there is no licensing on the number of users (beware of offers that limit you on that!). We just put a 32-bit number on the UID and let you have as many as you like.
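To put a number on it, a 32-bit UID gives over four billion distinct users before the identifier itself becomes a limit. A quick sketch, assuming unsigned IDs:

```python
# A 32-bit unsigned UID column caps the user table at 2**32 rows --
# far beyond any realistic head count (assumption: IDs are unsigned).
MAX_UID = 2**32
print(MAX_UID)  # 4294967296
```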
This holds right up to the point where too many users request something from your server at once, which takes us to point 2.
2. Number of requests/resources
We have all sorts of physical or soft limitations to the number of requests (and their speed) you can serve at any time.
So, you may have 10k users idling around with just a login session, but the actual trouble begins when 1000 of them decide that they want page X, now. Requests vary in nature and in the CPU and I/O time they need to be processed.
Do remember that, by default, Postgres will only serve about a hundred connections. You can raise this limit, at the cost of RAM that will be reserved for the db. Other system limits to watch are open file descriptors, the number of processes and, of course, always the available RAM.
There are two sides to scaling this up: you can add more hardware (on a single or a distributed, load-balanced system), or you can resolve the performance culprits so the application runs lighter.
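As a starting point for the "system limitations" above, it helps to see what limits your server process actually runs under. A minimal sketch, assuming a POSIX system where Python's stdlib resource module is available:

```python
import resource

# Inspect the soft/hard limits of the current process. Open file
# descriptors and process count are two of the usual bottlenecks
# mentioned above; RAM and Postgres max_connections are checked elsewhere.
nofile_soft, nofile_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("open files:", nofile_soft, "/", nofile_hard)
print("processes: ", nproc_soft, "/", nproc_hard)
```

Raising these limits (ulimit, systemd unit settings, postgresql.conf's max_connections) is the "more hardware won't help if the OS says no" side of the problem.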
3. Large database
Our database, Postgres, has practically no hard limits on the number of records. [ well, it does, but they are sky-high! ]
However, a large db will bring up all sorts of performance problems when sub-optimal queries are used. For example, computing balances for a set of 500 accounts over 5-10M accounting entries could cause unacceptable delays in the application's responsiveness.
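The accounts example above usually comes down to query shape: let the database aggregate, instead of dragging every row into the application. A self-contained sketch (SQLite stands in for Postgres here, and the table name is hypothetical):

```python
import sqlite3

# Toy ledger: 100k entries spread over 500 accounts.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entry (account INTEGER, amount INTEGER)")
db.executemany("INSERT INTO entry VALUES (?, ?)",
               [(n % 500, n % 7) for n in range(100_000)])
db.execute("CREATE INDEX entry_account ON entry (account)")

# Good: one aggregate query, the db does the heavy lifting.
balances = dict(db.execute(
    "SELECT account, SUM(amount) FROM entry GROUP BY account"))

# Bad: pull every entry into the application and sum there.
naive = {}
for account, amount in db.execute("SELECT account, amount FROM entry"):
    naive[account] = naive.get(account, 0) + amount

assert balances == naive  # same result, very different cost at 5-10M rows
```

At 100k rows both finish quickly; at 5-10M rows the second version is what turns a sub-second report into a coffee break.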
4. Operations, Workflows
Here, we talk about the complexity of the ERP schema, and the implications of using it in an extended deployment.
A good ERP will be flexible and scalable, meaning that it will allow your IT team (you have one, don't you?) to configure it and adapt it to your company's complex needs.
How much would it cost to add a form? Or re-route a workflow? Or implement a custom data connector to your legacy systems?
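The "re-route a workflow" question has a cheap answer when routing lives in configuration rather than code. A toy sketch of workflow-as-data (the step names are hypothetical):

```python
# When the route is plain data, re-routing is an edit, not a rewrite.
workflow = ["draft", "manager_approval", "posted"]

def next_step(current):
    """Return the step after `current`, or None at the end of the route."""
    i = workflow.index(current)
    return workflow[i + 1] if i + 1 < len(workflow) else None

# Re-route: squeeze in an extra approval without touching the engine.
workflow.insert(2, "finance_approval")
print(next_step("manager_approval"))  # finance_approval
```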
5. Protocols, deployment
A little different from point 4, deployment has to do with the IT rules that you have chosen to follow for all your enterprise software.
Would you blindly download something from the cloudy Internetz and use it in your production servers? Would you just "place the files there" and hope it runs? Could you survive a vague release schedule, and/or distribution methods?
Conclusion
Over the years, I've tried to address as many of the above points as possible, in several complementing ways:
- extensive optimizations and profiling of the ERP
- debugging, more debugging and counter-measures against bugs
- making the framework more developer-friendly, easier to hack and adapt to any different needs
- *always* keeping some strict design principles, to ensure that the final product will be deployable in enterprise environments, will have proper release schedules and smooth migrations.
- adding hooks for extension, so that a production server can be amended even in a "hot" production-critical setup.
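The last point, hooks for extension, can be sketched in a few lines: named hook points that plugins register against at runtime, so a live server can be amended without patching core code. A minimal illustration, with hypothetical hook and field names:

```python
# Registry of named hook points; plugins append callables at runtime.
hooks = {}

def register(hook_name, fn):
    hooks.setdefault(hook_name, []).append(fn)

def run_hook(hook_name, *args):
    """Call every function registered on `hook_name`, collecting results."""
    return [fn(*args) for fn in hooks.get(hook_name, [])]

# A plugin adds a validation to an existing hook point, "hot", no restart:
register("invoice.validate", lambda inv: inv["amount"] > 0)
print(run_hook("invoice.validate", {"amount": 100}))  # [True]
```

The core only has to agree on hook names and argument shapes; everything hanging off them can come and go while the server keeps running.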
That's what powers F3, as a matter of fact.