Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Are flag variables an absolute evil?

    - by dukeofgaming
    I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture/code; however, it is a common practice in other projects I work on, and when code grows and flags are added, IMHO code-spaghetti grows with it. Would you say there are any cases where using flags is good practice, or even necessary? Or would you agree that flags in code are... red flags, and should be avoided/refactored? Me, I just get by with functions/methods that check for state in real time instead. Edit: I'm not talking about compiler flags.
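
    For illustration, a minimal C# sketch of the contrast the question describes: a stored boolean flag versus a method that derives the same state on demand. All names here are hypothetical.

        class DocumentWithFlag
        {
            private bool _isDirty;                 // stored flag: must be kept in sync by every mutator
            public void Edit() { _isDirty = true; }
            public void Save() { _isDirty = false; }
            public bool IsDirty => _isDirty;
        }

        class DocumentDerived
        {
            private int _version;                  // incremented on every edit
            private int _savedVersion;             // snapshot taken on save
            public void Edit() { _version++; }
            public void Save() { _savedVersion = _version; }
            public bool IsDirty => _version != _savedVersion;  // state computed in real time, no flag to forget
        }

    The derived version cannot drift out of sync, which is the usual argument against flags; the flag version is cheaper and simpler when only one place mutates it.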

    Read the article

  • What is the point of dynamic allocation in C++?

    - by Aerovistae
    I really have never understood it at all. I can do it, but I just don't get why I would want to. For instance, I was programming a game yesterday, and I set up an array of pointers to dynamically allocated little enemies in the game, then passed it to a function which updates their positions. When I ran the game, I got one of those nondescript assertion errors, something about a memory block not existing; I don't know. It was a run-time error, so it didn't say where the problem was. So I just said screw it and rewrote it with static instantiation, i.e.:

        while (n < 4)
        {
            Enemy tempEnemy = Enemy(3, 4);
            enemyVector.push_back(tempEnemy);
            n++;
        }
        updatePositions(&enemyVector);

    And it immediately worked perfectly. Now sure, some of you may be thinking something to the effect of "Maybe if you knew what you were doing," or perhaps "n00b can't use pointers L0L," but frankly, you really can't deny that they make things way overcomplicated, which is why most modern languages have done away with them entirely. But please, someone: what IS the point of dynamic allocation? What advantage does it afford? Why would I ever not do what I just did in the above example?

    Read the article

  • How far has a bug pushed you? [closed]

    - by Darknight
    When debugging (hard to find) bugs, I know I've personally gotten so frustrated as to lash out at the keyboard and shout profanities at the monitor. I have repeatedly witnessed co-workers throw their computer mouse off the table in anger and frustration. What is the furthest a bastard of a bug has ever pushed you? EDIT: Hehehe :D It would seem this bug, er, I mean post, has pushed the guys to close it... Oh well, very very interesting answers anyway.

    Read the article

  • Best language or tool for automating tedious manual tasks

    - by Jon Hopkins
    We all have tasks that come up from time to time that we think we'd be better off scripting or automating than doing manually. Obviously some tools or languages are better for this than others; no one (in their right mind) does a one-off job of cross-referencing a bunch of text lists their PM has just given them in assembler, for instance. What one tool or language would you recommend for the sort of general quick-and-dirty jobs you get asked to do, where time (rather than elegance) is of the essence? Background: I'm a former programmer, now a development manager/PM, looking to learn a new language for fun. If I'm going to learn something for fun I'd like it to be useful, and this sort of use case is the most likely to come up.

    Read the article

  • Independent projects as a student to show off abilities

    - by tufcat
    I'm an inexperienced computer science student (having learned up to data structures and algorithms through various online resources), and I'm hoping to get a job as a developer some time after I've gotten a few independent projects done. My question is: how should I choose which projects to work on? I've been looking around Stack Overflow; people usually say to pick whatever you're interested in, but I don't have enough experience to even know what specifically I'm interested in, and possibly more importantly, I don't know what some common beginner project types are. Essentially, I'm at the gap between course work (and the projects entailed in those classes) and real programming, and I don't quite know how to start. If any of you have any ideas, I'd really appreciate it.

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've gotten into a bit of an argument at my workplace, and I'm trying to figure out who is right and what the right thing to do is. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's system administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately, there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof, of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... who's right?
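
    As an aside, the "configuration switch" idea mentioned above can be sketched in a few lines of C#. This is only an illustration under assumed names (Log is a hypothetical server-side logging helper), not a prescription:

        using System;

        static class ErrorReporting
        {
            // Hypothetical logging helper; in a real system this would write to the server-side log.
            static void Log(Guid referenceId, Exception ex) =>
                Console.Error.WriteLine($"{referenceId}: {ex}");

            public static string BuildUserMessage(Exception ex, bool includeDiagnostics)
            {
                var referenceId = Guid.NewGuid();
                Log(referenceId, ex);  // the full stack trace always reaches the developers via the log

                var message = $"An error has occurred. Please quote reference {referenceId} when contacting support.";

                // In development (or trusted intranet deployments), append the full details;
                // in hardened deployments, the user sees only the reference id.
                return includeDiagnostics ? message + Environment.NewLine + ex : message;
            }
        }

    This keeps the screenshot useful (the reference id ties it to a full server-side record) while satisfying the audit's demand that stack traces not be shown by default.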

    Read the article

  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I've already noticed that ideas like "MVC", "DRY", and "KISS" pretty much fall right into place. That said, I'm still having problems deciding whether one of two implementations is the best choice when it comes to the Single Responsibility Principle.

    Implementation #1:

        class User
            getName
            getPassword
            getEmail
            // etc...

        class UserManager
            create
            read
            update
            delete

        class Session
            start
            stop

        class Login
            main

        class Logout
            main

        class Register
            main

    The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly-named Ravioli Code), but following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation:

        class UserView extends View
            getLogin      // Returns the html for the login screen
            getShortLogin // Returns the html for an inline login bar
            getLogout     // Returns the html for a logout button
            getRegister   // Returns the html for a register page
            // etc... as needed

        class UserModel extends DataModel implements IDataModel
            // Implements no new methods yet, outside of the interface methods
            // Haven't figured out anything special to go here at the moment
            // All CRUD operations are handled by DataModel
            // through methods implemented by the interface

        class UserControl extends Control implements IControl
            login
            logout
            register
            startSession
            stopSession

        class User extends DataObject
            getName
            getPassword
            getEmail
            // etc...

    This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save to the database. All the user data rendering (what the user will see) is handled by UserView and its methods, and all the user actions are in one place in UserControl (plus some automated stuff required by the CMS to keep a user logged in, or to ensure that they stay out.) I personally can't think of anything wrong with this implementation either. My feeling is that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1.) So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I was offered a job building a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, to avoid the high cost of development charged by a service company, and to have someone young he can trust onboard to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer, thinking I could build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good. But again, the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practices. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code; it's a real pain to fix a bug; all the logic is in the controllers; and there is little object-oriented design. I keep having this persistent thought that I have to rewrite the whole project. However, I can't do it... they keep asking when it's all going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency or the web application's performance. On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the current code. I could finish up, wrap it up, and deploy, but God only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • Actor library / framework for C++

    - by Giorgio
    In the C++ project I am working on, we would like to use something like Scala actors and remote actors (see e.g. this tutorial). Being able to use remote actors (actors living in different processes, possibly on different machines, and communicating via TCP/IP) has higher priority for us, because we have an application consisting of several processes deployed on different machines. Being able to use several actors living in the same process (possibly in different threads) is also interesting, but has lower priority for the moment. On Wikipedia I have found some links to actor libraries for C++, and I have started to look at Theron. Before I dive too deep into the details and build an extended example with Theron, I wanted to ask if anybody has experience with any of these libraries and which one they would recommend.

    Read the article

  • How to avoid a mediocre CV

    - by QriousCat
    Though in every project we (testers) face a different set of challenges, when it comes to the CV we have more or less the same responsibilities. For example, responsibilities like understanding requirements, preparing and executing test cases, creating defects, and liaising with dev and BA teams will be repeated for every project we are involved in. If we keep writing the same responsibilities for every role, the CV becomes mediocre and a yarn. In fact, most of the testing resumes I have come across are like that. How do I avoid repetition of responsibilities in my resume and make it more interesting? If this is not the correct forum for this question, let me know. Thanks in advance for your suggestions.

    Read the article

  • SAP vs Other Technologies

    - by Bunny Rabbit
    I am a fresher just out of college. Till now I have worked with Java, Python, JavaScript, Groovy, Django, and jQuery, and web application development has been my only interest. I have been working for an IT company for the past three months; the work involves the ERP package SAP, and I am working on ABAP. Coming from a world of ORMs and languages like Python, SAP and database tables don't excite me much. With all the development happening around HTML5, Android, etc., I feel quite left out and bored in SAP. Can you guys suggest a proper way forward?

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs, and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing; but still, how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its: nearly immediate profit or immediate improvement. That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to:

        - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing, or ones which will be written in the future) which need to access the database?
        - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
        - Design a new communication protocol to allow faster replication of data between two data centers of the company?
        - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
        - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic, and previous experience?
        - Enhance the existing application by adding gestures on tactile screens, after doing studies and testing showing that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
        - Find a way to strongly enhance the Power Usage Effectiveness (PUE) of a data center?
        - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • What email providers are case sensitive? [on hold]

    - by Thanatos
    According to RFC 5321, the local-part of email addresses is case-sensitive. However, most providers that I know of (e.g., Gmail) are not case-sensitive. (It's actually more complex than that: Gmail ignores dots in the local part as well.) Is there a list, or source, of the various rules, including case sensitivity, for the major email providers? Is there a large-ish email provider that has case-sensitive email addresses?

    Read the article

  • Naming methods that perform HTTP GET/POST calls?

    - by antonpug
    In the application I am currently working on, there are generally three types of HTTP calls:

        1. pure GETs
        2. pure POSTs (updating the model with new data)
        3. "GET" POSTs (posting down an object to get some data back, with no updates to the model)

    In the integration service, we generally name methods that post postSomething(), and methods that get getSomething(). So my question is: if we have a "GET" POST, should the method be called:

        - getSomething, seeing as the purpose is to obtain data
        - postSomething, since we are technically using POST
        - performSomeAction, an arbitrary name that's more relevant to the action

    What are everyone's thoughts?
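
    For what it's worth, a small C# sketch of the third option: naming for intent while hiding the transport. The types and endpoint here are hypothetical, and PostAsJsonAsync/ReadFromJsonAsync assume the System.Net.Http.Json extensions are available:

        using System.Net.Http;
        using System.Net.Http.Json;
        using System.Threading.Tasks;

        public record QuoteCriteria(string Symbol);   // hypothetical request object
        public record Quote(decimal Price);           // hypothetical response object

        public class QuoteService
        {
            private readonly HttpClient _http = new HttpClient();

            // Named for what it does (fetch a quote), not how it does it (POST).
            public async Task<Quote> FetchQuoteAsync(QuoteCriteria criteria)
            {
                // The criteria object travels in a POST body, but the model is not modified.
                var response = await _http.PostAsJsonAsync("https://example.com/api/quotes/search", criteria);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadFromJsonAsync<Quote>();
            }
        }

    The caller never needs to know which verb is used, which keeps the get/post prefixes free to describe intent rather than mechanics.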

    Read the article

  • How to set up a simple Subversion workflow

    - by Milen Bilyanov
    I am trying to set up a simple SVN workflow at home. I am new to Subversion (and programming), so I have been reading the official PDF documentation, but I am still not sure how to set up my repository. I work mainly with Python, Bash, and RSL (RenderMan Shading Language). I already have a /dev structure on my disk, like this: http://imageshack.us/f/708/devstructure.png/ And I have a /site structure that links to my /dev folder: http://imageshack.us/f/651/sitestructure.png/ Obviously, starting to use SVN will change this approach that I already have in place. The question is, when I am setting up my SVN repository for the work I do in my /dev folder: should I set up a separate repository for each programming platform? And where exactly should I place my repository? Thanks.

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In", and in the first chapter I came across three terms: bug, defect, and flaw. The author gave a definition for each of them, but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect, and what is a flaw? I think I know what a bug is: a bug is a malfunction of a part of the system which produces an undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more, and correct me if I am wrong in this? UPDATE: To be more precise, in the book I mentioned above, the words are presented in a way that makes a distinction between them; that's why I am asking to know more. In that book there are some examples denoting which sample belongs to which category. For example: buffer overflow is said to be a bug, and issues in method overriding (subclassing issues) are put in the flaw category. Again, race condition handling issues are considered bugs, and error-handling problems (fails open) are said to be flaws! I'd like more elaboration on these distinctions.

    Read the article

  • Java EE Web Services study guides

    - by Marthin
    I'm going for the Java EE 6 Web Services Developer certificate, but I'm having a hard time finding solid study guides and mock exams. I already have the JPA cert, and will very soon have the EJB cert, so I'm not new to this stuff, but I've looked at CodeRanch and other places and all the information seems a bit outdated. So any tips for books, mock exams (free or not), tutorials, or other guides would be very much appreciated. EDIT: I will of course read all the JSRs needed.

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have really not been able to resolve myself, is exactly at what tier in an application human-readable error information should be retrieved for display to a user. A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }    // Implements IResult
            public string Message { get; private set; }  // Implements IResult, human readable
            // Constructor implemented here to set the read-only properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized. However, this idea of returning human-readable information from the business tier has always been suspect to me, partly because what constitutes the public surface of the API may change over time... and it may be the case that the API will need to be reused by other API components in the future that do not need the human-readable string messages (and looking them up from a database would be an expensive waste). I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and then retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here:

        1. The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.

        2. Often there are multiple legitimate modes of failure. If the business tier becomes too vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e., if a method has three modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.

    I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns, rather than just a boolean, but it is not so good for serialization scenarios. Is there a well-worn pattern for this? What do people think? Thanks.
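
    For illustration, a minimal sketch of the failure-enum idea the question ends with, under assumed names: the business tier returns a serializable, machine-readable code, and the mapping to human-readable (and localizable) text lives near the UI.

        using System.Collections.Generic;

        public enum ClearUserAccountsFailure
        {
            None = 0,
            AccountLocked,
            InsufficientPermissions,
            AccountsInUse
        }

        // Business tier: no display strings, serializes cleanly over WCF or anything else.
        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsFailure Failure { get; private set; }

            public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure)
            {
                Success = success;
                Failure = failure;
            }
        }

        // UI layer: owns the human-readable wording, can be swapped per locale.
        public static class FailureMessages
        {
            private static readonly Dictionary<ClearUserAccountsFailure, string> Messages =
                new Dictionary<ClearUserAccountsFailure, string>
                {
                    { ClearUserAccountsFailure.AccountLocked,           "The account is locked." },
                    { ClearUserAccountsFailure.InsufficientPermissions, "You do not have permission to clear accounts." },
                    { ClearUserAccountsFailure.AccountsInUse,           "One or more accounts are currently in use." }
                };

            public static string For(ClearUserAccountsFailure failure) =>
                Messages.TryGetValue(failure, out var text) ? text : "An unknown error occurred.";
        }

    This addresses problem 2 (each legitimate failure mode is distinguishable) without baking display text into the business tier; problem 1 remains a matter of shipping the enum-to-text table with each client, or serving it once from the server.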

    Read the article

  • Should I be worried if I solve a lot of my problems the same way?

    - by Bryan Harrington
    I really enjoy programming games and puzzle creators/games. I find myself engineering a lot of these problems the same way, and ultimately using similar techniques to program them that I'm really comfortable with. To give you brief insight: I like to create graphs where nodes are represented by objects. These objects hold data such as coordinates and positions, and of course references to other neighboring objects. I'll place them all in a data structure and make decisions on this information in a "game loop". While this is a brief example, it's not exact in all situations. It's just one approach I feel really comfortable with. Is this bad?

    Read the article

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible by the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name. In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The custom name shall ideally describe the name of the library which it is part of. Here arises the confusion. Let's say I'm writing a binding for libcURL. In this case, it seems reasonable to call my extension library curl_myscript_binding, like this: MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10]; But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib but unfortunately, libcURL does not exclusively use the curl_ prefix -- the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they now clutter the myscript namespace. (I've done some research, and it seems that for example the Perl cURL binding follows this pattern. Not sure what I should think about that...) So how do you suggest I name my variables? Are there any general guidelines that should be followed?

    Read the article

  • How to add a subclass to a view controller in a storyboard

    - by Ken Barlo
    Here's what I've created:

        1. I created a view controller element (in the storyboard)
        2. I created a new view controller .h and .m file

    Here's my issue: I can't seem to figure out how to get the content that I've added to the .m file to show up on the view controller element in the storyboard once the app is launched and that view controller element is selected in the iPhone simulator. I am, however, able to see content that was already added to a .m file on the home screen and activity screen when the selection is made in the iPhone simulator. Hope that makes sense; if not, please ask and I'll be more than happy to provide more info.

    Read the article

  • How to learn programming in Kindergarten?

    - by Kinder
    Last time I asked for a peer review of a new language called KinderScript, whose Code Division Multiple Access succinct style looked like white noise that saturated two police reviewers' narrow bands. The question lived for only an hour, with 38 views, shortly after the shouting of shut-up-leave-now. OK, that's totally off topic; that is not the question. I'm asking for a peer review of the design of KinderScript [1], within the context of an intriguing question: "How to learn programming in kindergarten?" [1] http://code.google.com/p/ac-me/downloads/detail?name=kinder.pdf&can=2&q= Thanks for any feedback. No police, please. I chose this forum to ask because it has not only many professionals but also many new learners. Both views are appreciated.

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices at which I used to roll my eyes in despair. I saw the software industry through rose-colored glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, usually it is more important to produce quickly and cheaply, just to see if the project can succeed. Before that point, it does not really matter that much whether the design is cheap to maintain; after that point, it might be too late to improve things. They need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients, and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (And not the software architecture alone.)

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need some computation. Each record will have some combination of these functions run on it: Sum, Aggregate, Sum over the last 90 seconds, or Ignore. A data record looks like this: Date;Data;ID

    Question: Assuming that ID is an int of some kind, and that int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");
                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);

                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
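
    As a starting point, here is a minimal sketch of one way to build such a launch map in C#: a dictionary keyed on the record's ID, mapping to the array of delegates that should fire for that ID. The Record type and the ID-to-function assignments are hypothetical.

        using System;
        using System.Collections.Generic;

        sealed class Record
        {
            public DateTime Date;
            public int Data;
            public int Id;
        }

        static class LaunchMapDemo
        {
            // Each ID maps to whichever combination of operations should run for it.
            static readonly Dictionary<int, Func<Record, int>[]> LaunchMap =
                new Dictionary<int, Func<Record, int>[]>
                {
                    { 0, new Func<Record, int>[] { Print } },       // e.g. ID 0: print only
                    { 1, new Func<Record, int>[] { Sum } },         // e.g. ID 1: sum only
                    { 2, new Func<Record, int>[] { Sum, Count } },  // or any combination
                };

            static int Sum(Record r)   { return r.Data; }  // stand-in computation
            static int Count(Record r) { return 1; }       // stand-in computation
            static int Print(Record r) { Console.WriteLine(r.Data); return r.Data; }

            static void Dispatch(IEnumerable<Record> stream)
            {
                foreach (var record in stream)
                {
                    // Records whose ID has no entry are simply ignored.
                    if (LaunchMap.TryGetValue(record.Id, out var functions))
                    {
                        foreach (var f in functions)
                            f(record);
                    }
                }
            }
        }

    To "print the evens and sum the odds", the lookup could instead key on record.Id % 2. The same table can be populated at runtime, from configuration or from the data itself, which is what makes the dispatch dynamic.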

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations), and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come with different endianness. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped regarding how best to package the library. Should methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or something.
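
    To make the two candidate designs concrete, a brief C# sketch of both, assuming 32-bit values (the class and method names are only illustrative):

        public enum Endianness { Little, Big }

        // Option 1: one method, endianness passed as a parameter.
        public static class Bits
        {
            public static bool GetBit(uint value, int index, Endianness endianness) =>
                endianness == Endianness.Little
                    ? ((value >> index) & 1u) != 0          // index 0 = least significant bit
                    : ((value >> (31 - index)) & 1u) != 0;  // index 0 = most significant bit
        }

        // Option 2: identically named methods on separate static classes.
        public static class Lsb
        {
            public static bool GetBit(uint value, int index) => ((value >> index) & 1u) != 0;
        }

        public static class Msb
        {
            public static bool GetBit(uint value, int index) => ((value >> (31 - index)) & 1u) != 0;
        }

    Option 1 lets callers choose at runtime and keeps one documentation entry per operation; Option 2 makes the choice visible at every call site and cannot be handed an invalid value. Which reads better depends on whether endianness ever varies at runtime in typical client code.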

    Read the article
