Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 292/563 | < Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >

  • ISO 12207: Verification of integration and Unit test validation

    - by user970696
    I have received comments from the supervisor reviewing my thesis. He asked two questions I cannot answer right now: If ISO 12207 says under "Integration verification" that it "checks that components are correctly and completely integrated into a system", how can this be verified without testing, if all testing is validation? Without testing, how can I know that the system is integrated correctly and completely? And if unit testing is validation, how does it match the ISO definition of validation, "that requirements for intended use were fulfilled", when it is so low level?

    Read the article

  • Portal Server comparisons / TCoO

    - by Scott
    We have a client who is looking to incorporate Oracle Portal into our next release. I'm newer to this team, but the team is currently working with Apache, so whichever portal server we choose will likely involve a bit of a learning curve. Is there any comparison (not marketing) out there that discusses the differences between the servers and/or their total cost of ownership? With 5 developers, installing RAD becomes expensive, a cost I assume they'd wish to pass on to us with the change to Oracle Portal and WebSphere.

    Read the article

  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor though, in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor, since it will try to log in and hang on the connection attempt (there is no timeout in the component). This really isn't an issue on the client side of the system, since the clients don't disconnect and reconnect repeatedly on the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas as to which way to have the program run, the main idea being to put the communications inside a threaded data module so that if one thread crashes, another thread can test later and the program keeps going. Does this sound like a valid way to go? Any other ideas on how to ensure a reliable monitoring application with a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but not high enough rep to create.
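    The asker's stack is Delphi/Asta, but the threaded-watchdog idea is language-neutral: run the blocking login attempt on a disposable worker thread and impose a timeout from the outside, so a hung connection attempt cannot freeze the monitor. A minimal Java sketch of that pattern (the loginAttempt callable is a hypothetical stand-in for the real connection call):

        import java.util.concurrent.*;

        public class LoginProbe {
            // Runs a blocking login attempt on a worker thread and gives up after a timeout,
            // so a hung connection attempt cannot stall the monitoring loop.
            public static boolean tryLogin(Callable<Boolean> loginAttempt, long timeoutSeconds) {
                ExecutorService executor = Executors.newSingleThreadExecutor();
                Future<Boolean> result = executor.submit(loginAttempt);
                try {
                    return result.get(timeoutSeconds, TimeUnit.SECONDS);
                } catch (TimeoutException e) {
                    result.cancel(true);   // abandon the hung attempt and move on
                    return false;
                } catch (Exception e) {
                    return false;          // any failure counts as "system not reachable"
                } finally {
                    executor.shutdownNow();
                }
            }
        }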

    Read the article

  • When will Java change to 64-bit addressing, and how can we get there faster?

    - by Ido Tamir
    Having to work with large files now, I would like to know when the Java libraries will start switching to long for indexing in their methods. From InputStream's read(byte[] b, int off, int len) - funnily, there is also long skip(long) - to MappedByteBuffer to the basic indexing of arrays and lists, everything is addressed as int. Is there an official plan for enhancement of the libraries? If there is no official plan yet, do initiatives exist to pressure Oracle into enhancing them?
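    Until the core libraries change, the usual workaround is to split the long address space into int-sized windows yourself, for example by mapping a large file in chunks. A rough Java sketch (the chunk size and helper class are illustrative, not an official API):

        import java.io.IOException;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class ChunkedMapping {
            private static final long CHUNK_SIZE = 1L << 30; // 1 GiB per mapping, safely below Integer.MAX_VALUE

            // Reads one byte at a long offset from a file larger than 2 GiB by mapping
            // the containing chunk, since each MappedByteBuffer is limited to int indexing.
            public static byte readAt(Path file, long position) throws IOException {
                try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                    long chunkStart = (position / CHUNK_SIZE) * CHUNK_SIZE;
                    long length = Math.min(CHUNK_SIZE, channel.size() - chunkStart);
                    MappedByteBuffer buffer =
                            channel.map(FileChannel.MapMode.READ_ONLY, chunkStart, length);
                    return buffer.get((int) (position - chunkStart));
                }
            }
        }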

    Read the article

  • A potential new hire may make more money than you do... what should I do? [migrated]

    - by Ali
    I have very solid experience in software development and project management. I just learned that the company I work for has posted a new position requiring half my experience and an even smaller set of skills than I have. However, the new position pays the same salary as my current one, while at the same time they declined to pay my asking salary when I was joining the company. How should I counter this? I know there are many projects in the pipeline where my involvement is crucial for them to be successful, but I don't want to come across as threatening. Any advice would be helpful!

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running some sort of data collection and analysis software. I have come to develop two guidelines when I work with my colleagues: 1) define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap; 2) don't assume your colleagues know everything about their responsibilities - I assume there is some sort of technology that I will need to be competent at to properly interface with the work of my colleagues. The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product; for that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have - at least not until they've demonstrated it to me a couple of times. So whenever I develop a piece of firmware that interfaces with some technology I don't know, I try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUSBdotNET for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified, as it gives me a foothold to confront integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?

    Read the article

  • Any ideas about how to make a Programming Techniques class more interesting?

    - by Eedoh
    Hello. I already found a similar question here on SO, but almost all the answers were more philosophical than practical. I'd like you to share some of your practical ideas about how to make my course more interesting. It doesn't matter how much effort it takes from me. I even thought about trying to motivate the students to pick some topic at the beginning of the course and to work on it as a kind of real, small startup project that they could maybe exploit financially once it's finished. But I'm afraid that most of them will not see the project through to the end, and that it could be boring for them to work on one thing all year long. I also thought about involving them in Torcs, but I'm afraid most of them wouldn't be up to the task. Btw, Torcs is a car racing simulation, but there's an API for developers so they can develop their own AI for the driver and then race their cars against the other programmers' AIs. I'm not asking here for problem examples, as I asked a separate question about that. I need ideas about making my lectures more interesting and fun.

    Read the article

  • What is So Unique About Node.js?

    - by Adrian Shum
    Recently there has been a lot of praise for Node.js. I am not a developer who has had much exposure to network applications. From my basic understanding of Node.js, its strength is that we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, for example, I can create just one thread using NIO/AIO (which are non-blocking APIs, as far as I understand), handle multiple connections using that thread, and provide an event-based architecture to implement the data-handling logic (it shouldn't be that difficult to provide some callbacks, etc.). Given that the JVM is an even more mature VM than V8 (I expect it to run faster too), and that an event-based handling architecture doesn't seem difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?
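    For what it's worth, the single-threaded, event-driven loop the asker describes can indeed be written directly with java.nio. A bare-bones echo-server sketch (error handling, partial writes and per-connection buffering omitted):

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        // One thread multiplexing many connections with a Selector,
        // roughly the event-loop shape Node.js provides out of the box.
        public class EchoLoop {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();                     // block until some channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {          // new connection: register it for reads
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {     // data arrived: echo it back (the "callback")
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            int read = client.read(buffer);
                            if (read == -1) { client.close(); continue; }
                            buffer.flip();
                            client.write(buffer);
                        }
                    }
                }
            }
        }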

    Read the article

  • Most effective way to do daily standup meeting when a few people are remote

    - by Burhan Ali
    I am a software developer in a small team of seven. We are not an Agile (with a big 'A') team but are experimenting with some aspects of agile. One of these is the daily "standup" meeting. The difficulty here is that for two days of the week we have at least one person working from home, so the full team isn't available in the same room. What is the best way to carry out a daily standup in this situation? Some facts that may be relevant: We all work in a single open-plan room. We use Skype in our company. We don't have any video conferencing capability. We all work the same hours, so there are no timezone complexities involved. The development manager is one of the people who works from home one day a week. Things we have tried: Conference call using Skype - this is tricky for those in the office, because you can hear people speak in the room and then a split second later through the headset, which can be very distracting. Conference phone - an awful experience; hard to get them to work, and poor quality audio. Text-based updates using Skype - this is not as engaging and is no different from just firing off a status email in the morning. I have seen other questions about remote collaboration, but they are mainly about completely remote teams and/or teams that span multiple time zones. We are not affected by either of those problems. What can we do to make our standup meetings better in these circumstances?

    Read the article

  • git in non-distributed, independent, lone programming ...best practice(s) ?

    - by explorest
    I am currently studying the git documentation to get the hang of the distributed version control workflow and use of the git command line. I want to start using git with small, personal, pet projects first, so as to gain experience before doing it on a larger scale (i.e., bigger projects, team development). Which areas of the git system should I, as a lone player, devote most of my study time to, and which parts should I leave for the larger-scale work later on? In other words, which features of the git system will only be fully grasped in team work, and therefore should not be something I get too involved with at an individual level?

    Read the article

  • Will software development be completely outsourced to Asia?

    - by gablin
    Lots of things that used to be done in the U.S. or Europe are nowadays done in Asia: car manufacturing, making of clothes and shoes, almost all computer components, etc. And it looks like part of software development is heading the same way. In fact, it is already underway. Where I previously worked as a functional tester on a project with 150+ developers, almost a third of the positions were outsourced to India. Will this process go further and further until all software development is done in Asia?

    Read the article

  • How would you design an application with many target platforms and devices?

    - by Pierre 303
    I'm at the very beginning of the design phase of an application that will have to run on the following platforms/devices: Desktop: Windows, Linux & Mac. Mobile: Android, iPhone/iPad & Windows Phone 7. Web: Silverlight. I will use C# on Mono and I want to maximize code re-usability. Except for the desktop (I'll use WinForms/GTK#), my concern is the many different GUIs that I will face. What would be your approach? Obviously the views will be different, but what about the controllers, data access, utility classes, etc.? Is it really acceptable to share everything but the views?
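    One common answer is yes: keep the views as thin, platform-specific adapters behind an interface and share everything else (controllers/presenters, data access, utilities). A minimal sketch of that split - shown in Java rather than the asker's C#, with made-up names, purely to illustrate the shape:

        // The only platform-specific piece: each target (WinForms, GTK#, iOS, ...) implements this.
        interface OrderView {
            void showTotal(String formattedTotal);
            void showError(String message);
        }

        // Shared presenter: platform-neutral, reusable and unit-testable on every target.
        class OrderPresenter {
            private final OrderView view;

            OrderPresenter(OrderView view) {
                this.view = view;
            }

            void onQuantityChanged(int quantity, double unitPrice) {
                if (quantity < 0) {
                    view.showError("Quantity cannot be negative");
                    return;
                }
                view.showTotal(String.format("%.2f", quantity * unitPrice));
            }
        }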

    Read the article

  • Java or C# for a PL/SQL Developer

    - by OracleDeveloper
    Hello, can you please suggest what my next career move should be? I am an Oracle developer; I have worked in Forms and Reports and know PL/SQL and SQL well. Now I am thinking of learning a new technology, as there are no jobs in PL/SQL alone and Oracle's front-end Forms and Reports are on the verge of extinction. The issue I have with Java is that it's HUGE, and I would need to learn many other technologies as well (Struts, Hibernate, Spring, etc.) in addition to advanced Java and Java EE. I am wondering which technology would give me an edge alongside PL/SQL and Oracle. Thank you.

    Read the article

  • What happened to this type of naming convention?

    - by Smith
    I have read so many docs about naming conventions, most recommending both Pascal case and camel case. Well, I agree with this; it's OK. This might not be pleasing to some, but I am just trying to get your opinion on why you name your objects and classes in a certain way. What happened to this type of naming convention, or why is it bad? When I want to name a struct, I prefix it with "struct". My reason: in IntelliSense I see all the structs in one place, and anywhere I see the struct prefix, I know it's a struct: structPerson, structPosition. Another example is the enum, although I may not prefix it with "enum" but maybe with "enm": enmFruits, enmSex. Again, my reason is so that in IntelliSense I see all my enums in one place. Because .NET has so many built-in data structures, I think this helps me do less searching. I used .NET in this example, but I welcome language-agnostic answers.

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without going bankrupt. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely this user's answers match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously this hits the concurrent writes limit of the Datastore. So I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with a shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus, I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method, I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) This raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? This is hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. And what do I do with data from clients while the backend instance is dead or restarting?

    Read the article

  • Web Services and code lists

    - by 0x0me
    Our team is heavily discussing how to handle code lists in a web service definition. The design goal is to describe a provider API for querying a system using various values. Some of them are catalogs, i.e. code lists. A catalog or code list is a set of key-value pairs. There are different systems (at least 3) maintaining possibly different code lists. Each system should implement the provider API, but each system might have a different code list for the same business entity - e.g. think of colors: one system knows [(1,'red'),(2,'green')] and another one knows [(1,'lightgreen'),(2,'darkgreen'),(3,'red')], etc. Access to the different provider API implementations will be encapsulated by a query service, but there is already one candidate which might use at least one provider API directly. The options for designing the API currently under discussion are:
    1. Use an abstract code list in the interface definition: the web service interface defines a well-known set of code lists which are expected to be used for querying and returning data. Each provider API implementation has to map request and response values from those abstract code lists to the system-specific ones (a rough sketch of this mapping follows below).
    2. Let the query component handle the code lists: the encapsulating query service knows the code list set of each provider API implementation and takes care of mapping the input and output to the system-specific code lists of the queried system.
    3. Do not use code lists in the query definition at all: just query code lists by a plain string and let the provider API implementation figure out the right value. This might lead to a loss of information and possibly many false positives, because the input string may not map canonically to a code list value (e.g. green - lightgreen, or green - darkgreen, or both).
    What are your experiences with, or solutions to, such a problem? Could you give any recommendation?
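    A rough sketch of the first option, with made-up color codes and class names, just to make the per-provider mapping concrete:

        import java.util.Map;
        import java.util.Optional;

        // The provider API speaks one abstract code list; each system maps its own codes onto it.
        enum AbstractColor { RED, GREEN }

        class SystemBColorMapper {
            // Hypothetical system B: 1 -> lightgreen, 2 -> darkgreen, 3 -> red
            private static final Map<Integer, AbstractColor> MAPPING = Map.of(
                    1, AbstractColor.GREEN,
                    2, AbstractColor.GREEN,
                    3, AbstractColor.RED);

            Optional<AbstractColor> toAbstract(int systemCode) {
                return Optional.ofNullable(MAPPING.get(systemCode));
            }
        }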

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of the project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems this might lead to.

    The Problem

    In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits, or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model

    The journal entries are append-only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.

    My Proposed Solution

    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$;

        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraints:

        CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric LANGUAGE sql IMMUTABLE AS
        $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$;

        ALTER TABLE journal_entry ADD CHECK (((journal_entry.running_total) = 0));

        ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS
        $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

    And finally:

        CREATE TRIGGER je_breakout_trigger AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that tracks approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?

    In getting to this point, I have to say I have looked at four different solutions used by current ERP systems for this problem:
    1. Represent every line item as a debit and a credit against different accounts.
    2. Use foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use constraint triggers in PostgreSQL.
    4. Force all validation solely through the app logic.
    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make it work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem, it isn't the case that everyone agrees on the solution...

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is about the method of calling the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest part, since we don't store the tweets in our database, so viewing the tweets means that we have to call the API every time. One of the people on the project suggested that we fetch the tweets through Javascript instead, to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As said here: "We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values." If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API any more every time the user wants to view tweets. Here's how I picture what happens every time the user makes a request: if we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
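    For reference, the server-side signing step Twitter describes boils down to an HMAC-SHA1 over the OAuth 1.0a signature base string, with all secrets kept on the server; the browser only ever sees the finished values. A rough Java sketch (assembly and sorting of the parameter string is omitted, and the class name is made up):

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class OAuthSigner {
            static String percentEncode(String s) {
                return URLEncoder.encode(s, StandardCharsets.UTF_8)
                        .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
            }

            // baseString = HTTP method & percent-encoded URL & percent-encoded, sorted parameter string
            static String sign(String baseString, String consumerSecret, String tokenSecret) throws Exception {
                String key = percentEncode(consumerSecret) + "&" + percentEncode(tokenSecret);
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
                byte[] rawSignature = mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(rawSignature);
            }
        }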

    Read the article

  • I am afraid that my University is not going to teach me enough information [closed]

    - by Muhklayne
    I attend a university and am a Computer Science major. I have barely entered the major, as I am a sophomore. However, the coursework I am doing is already extremely easy, and I feel as though this degree is going to lead me down a path of knowledge without knowing how to bring it all together. Therefore, I am coming to you to ask where I should begin learning on my own! I am willing to dedicate hours upon hours to learning to code outside of class, as it is truly my passion. I will begin by completing all the work on http://www.codecademy.com, however I feel this will not be enough either. I would love to learn visual languages for video games such as XNA and C#, combining them with C++ (as I understand video games can be created in this manner). I would also like to look into Lua and Python scripting. I am asking for advice as to where I should begin my personal studies of learning to program, as with my research it has become quite apparent that simply attaining a degree in Computer Science is frankly not enough. Thank you for your time!

    Read the article

  • How do PGP and PEM differ?

    - by Dummy Derp
    Email messages are sent in plain text, which means that the messages I send to Derpina are visible to anyone who somehow gets access to them while they are in transit. To overcome this, various encryption mechanisms were developed; PEM and PGP are two of them. PEM canonically converts the message, adds a digital signature, encrypts it, and sends it. PGP does exactly the same. So where do they differ? Or is it that PGP (being a program) is used to generate a PEM message?

    Read the article

  • What conventions or frameworks exist for MVVM in Perl?

    - by Will Sheppard
    We're using Catalyst to render lots of web forms in what will become a large application. I don't like the way all the form data is confusingly lumped into a big hash in the controller before being passed to the template. It seems jumbled up and messy for the template. I'm sure there are real disadvantages that I haven't described properly... are there? One solution is to just decide on a convention for the hash, e.g.:

        {
            defaults => { type => ['a', 'b', 'c'] },
            input    => { type => 'a' },
            output   => {
                message => "2 widgets found of type a",
                widgets => [ 'foo', 'bar' ],
            },
        }

    Another way is to store the page/form data as attributes in a class (a ViewModel?) and pass a whole object to the template, which it could use like this:

        <p class="message">[% model.message %]</p>
        [% FOREACH widget IN model.widgets %]

    Which way is more flexible for large applications? Are there any other solutions or existing Catalyst-compatible frameworks?

    Read the article

  • Dealing with coworkers when developing, need advice [closed]

    - by Yippie-Kai-Yay
    I developed our current project architecture and started developing it on my own (reaching something like revision 40). We're developing a simple subway routing framework, and my design seemed to be done extremely well - several main models, corresponding views, the main logic and data structures were modeled "as they should be" and fully separated from rendering; the algorithmic part was also implemented apart from the main models and had a minor number of intersection points. I would call that design scalable, customizable, easy to implement, interacting mostly based on "black box interaction" and, well, very nice. Now, what was done:
    - I started some implementations of the corresponding interfaces, ported some convenient libraries and wrote implementation stubs for some application parts.
    - I had a document describing the coding style and examples of that coding style's usage (my own written code).
    - I enforced the usage of more or less modern C++ development techniques, including no-delete code (wrapped via smart pointers), etc.
    - I documented the purpose of concrete interface implementations and how they should be used.
    - Unit tests (mostly integration tests, because there wasn't a lot of "actual" code) and a set of mocks for all the core abstractions.
    I was absent for 12 days. What do we have now (the project was developed by 4 other members of the team)?
    - 3 different coding styles all over the project (I guess two of them agreed to use the same style :), and the same applies to the naming of our abstractions (e.g. CommonPathData.h, SubwaySchemeStructures.h), which are basically headers declaring some data structures.
    - An absolute lack of documentation for the recently implemented parts.
    - What I could recently call a single-purpose abstraction now handles at least 2 different types of events, has tight coupling with other parts, and so on.
    - Half of the used interfaces now contain member variables (sic!).
    - Raw pointer usage almost everywhere.
    - Unit tests disabled, because "(Rev.57) They are unnecessary for this project".
    - ... (that's probably not everything).
    The commit history shows that my design was interpreted as overkill, and people started combining it with personal bicycles and reimplemented wheels, and then had problems integrating the code chunks. Now the project still does only a small amount of what it has to do, we have severe integration problems, and I assume some memory leaks. Is there anything that can be done in this case? I do realize that all my efforts didn't have any benefit, but the deadline is pretty soon and we have to do something. Has someone had a similar situation? Basically I thought that a good start for the project (well, I did everything that I could) would probably lead to something nice; however, I understand that I'm wrong. Any advice would be appreciated, and sorry for my bad English.

    Read the article

  • Online Poker Game Programming

    - by Eyal
    I am trying to write a massively multiplayer online (MMO) game for a poker site, where one user can be on a Flash client and the other on, say, an iOS client (iPhone/iPad), and I would like to know how interaction between two users can be made visible on both clients. Do I use MSMQ? AJAX? Something else? I need the messaging layer (client interaction messages) to scale up to 100K+ online users to begin with. In other words: what scalable technology can I use to make game interactions between online users visible to all game participants? Thank you much in advance! Eyal

    Read the article

  • How do I print values of an array in Three Rows?

    - by Ramkumar
    I need this output for array(1,2,3,4,5,6):

        1 3 5
        2 4 6

    If I change the array to array(1,2,3), the output needs to show:

        1 2 3

    The concept is a maximum of 3 columns only. If we give array(1,2,3,4,5), the output should be:

        1 3 5
        2 4

    Suppose we give array(1,2,3,4,5,6,7,8,9); then the output is:

        1 4 7
        2 5 8
        3 6 9

    That is, a maximum of 3 columns only. Depending on the given input, the rows will be created with 3 columns. Is this possible with PHP? I am doing a small bit of research and development on array functions. I think this is possible. Will you help me? For more info:

        input:  array(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
        output: 1 6 11
                2 7 12
                3 8 13
                4 9 14
                5 10 15
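    The asker wants PHP, but the whole trick is the indexing: with at most 3 columns, rows = ceil(n / 3) and the value printed at (row r, column c) is values[c * rows + r]. A small Java sketch of that column-major arithmetic:

        public class ThreeColumnPrint {
            public static void print(int[] values) {
                int cols = 3;
                int rows = (values.length + cols - 1) / cols;   // ceiling division
                for (int r = 0; r < rows; r++) {
                    StringBuilder line = new StringBuilder();
                    for (int c = 0; c < cols; c++) {
                        int index = c * rows + r;               // column-major position
                        if (index < values.length) {
                            line.append(values[index]).append(' ');
                        }
                    }
                    System.out.println(line.toString().trim());
                }
            }

            public static void main(String[] args) {
                print(new int[] {1, 2, 3, 4, 5, 6});            // prints "1 3 5" then "2 4 6"
            }
        }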

    Read the article

  • How do programmers balance the upper or lower case style for naming files or folders between work and life?

    - by sojyq
    I am a programmer from China, and I like to use English words to name my files and folders, whether for work or for life - for example, Movie, Work, QtProjects, Music, and so on. In Windows I keep the habit of capitalizing the first letter of file and folder names. But now I work on Ubuntu, and I found that all file and folder names are lowercase, apart from the default folders such as Music, Movie, and so on. Then I realized that in the Linux world, most people like to use all lowercase to name their files and folders, for two reasons (1. Linux is case sensitive. 2. It is faster for shell commands.). After work, when I switch from Linux to Windows, I am confused about whether to use the all-lowercase or first-letter-uppercase style to name my files in Windows. I'm caught in a dilemma. I think all lowercase is more efficient, but first-letter uppercase is more readable. I thought for a long time and wanted to come up with a good answer to balance the two naming styles, but I failed. I want to ask you: how do you balance the uppercase or lowercase habit in Windows, Mac, and Linux between work and your personal life? Thank you very much! (My current solution is that when I am on Linux I use all lowercase for files and folders, but on Windows and Mac OS X I couldn't find a good reason to convince me to use all lowercase - I think on Windows and Mac OS X the first-letter-uppercase style is more readable and beautiful to me.)

    Read the article
