Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 392 of 563

  • Serializing network messages

    - by mtsvetkov
    I am writing a network wrapper around boost::asio and was wondering what is a good and simple way to serialize my messages. I have a message factory which can take care of dispatching the data to the correct builder, but I want to know if there are any established solutions for getting the binary data on the sender side and consequently passing the data for deserialization on the receiver end. Some options I've explored are:

    - passing a pointer to a char[] to the serialize/deserialize functions (for serialize to write to, and deserialize to read from), but it's difficult to enforce buffer size this way;
    - building on that, having the serialize function return a boost::asio::mutable_buffer; however, ownership of the memory gets blurred between multiple classes, as the network wrapper needs to clean up the memory allocated by the message builder.

    I have also seen solutions involving streambufs and stringstreams, but manipulating binary data in terms of its string representation is something I want to avoid. Is there some sort of binary stream I can use instead? What I am looking for is a solution (preferably using Boost libs) that lets the message builder dictate the amount of memory allocated during serialization, and what that would look like in terms of passing the data around between the wrapper and the message factory/message builders.

    PS: Messages contain almost exclusively built-in types and PODs and form a shallow but wide hierarchy for the sake of going through a factory.

    Note: a link to examples of using boost::serialization for something like this would be appreciated, as I'm having difficulty figuring out the relation between it and buffers.
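
    Boost does not dictate one answer here, but a minimal sketch of the "builder dictates the allocation" idea (not from the post; PositionMessage and the function names are made up) could have serialize() return a std::vector<char> by value and let the wrapper merely view that storage through boost::asio::buffer:

        // Sketch only: the builder sizes and owns the storage; the wrapper
        // wraps it for the write without copying or taking ownership.
        #include <boost/asio.hpp>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        struct PositionMessage {             // a POD-style message, as in the post
            std::uint32_t id;
            float x, y, z;
        };

        std::vector<char> serialize(const PositionMessage& msg) {
            std::vector<char> out(sizeof msg);          // builder dictates the size
            std::memcpy(out.data(), &msg, sizeof msg);  // ignores endianness/padding
            return out;                                 // ownership moves to the caller
        }

        void send(boost::asio::ip::tcp::socket& socket, const PositionMessage& msg) {
            std::vector<char> payload = serialize(msg);
            // buffer() neither copies nor owns; payload must outlive the write,
            // which this synchronous call guarantees.
            boost::asio::write(socket, boost::asio::buffer(payload));
        }

    For async_write the vector would have to be kept alive until the completion handler runs (for example by moving it into an object bound to the handler), which is exactly the ownership question raised above.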

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.) and a component of type B, characterised by a different set of data (a different or the same parameter, a different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied to each component are the same (a Galerkin method, for example).

    If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that would act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on data would be trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used; I know that Haskell has monads for that.

    To conclude, would implementing the FP approach actually be simpler, less time consuming and easier to manage (adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.
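
    As a rough illustration of the "plain data plus pure functions" split described above - written in C++ terms given the C++/Fortran background, with Component and solve as made-up names - the component becomes a value and the solver a function that never mutates its input, so a changed parameter simply means building a new value:

        // Sketch of the FP-style decomposition: immutable data, pure functions.
        struct Component {            // plain data: parameters, boundary conditions, ...
            double temperature;
            double pressure;
        };

        struct Solution { double value; };

        // Pure: takes component data, returns a solution, mutates nothing.
        Solution solve(const Component& c) {
            return Solution{ c.temperature * c.pressure };   // placeholder computation
        }

        // "Updating" a parameter means producing a new component value.
        Component withTemperature(Component c, double t) {
            c.temperature = t;     // modifies only the local copy
            return c;
        }

    The OOP version would bundle the same fields and a Solve() method into one class and mutate its state in place; the trade-off in the question is essentially which of those two shapes stays easier to extend.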

    Read the article

  • What is the most professional way to deal with another programmer who has checked out mentally?

    - by hal10001
    The lead on the same project I'm on shows decreasing interest in project work, especially lead activities. This has been going on for a while now, and some animosity is starting to grow between us based upon decisions made and the overall attitude toward client interactions and tasks. This person is not necessarily a bad programmer, but I can tell they are mentally checking out and shutting down. Generally speaking, how do you deal with this behavior?

    Read the article

  • How to suggest changes as a recently-hired employee ?

    - by ereOn
    Hi, I was recently hired by a big company (thousands of people, to give an idea of the size). They said they hired me because of my rigor and because, despite my youth (I'm 25), I was experienced as a C/C++ programmer. Now that I'm in, I can see that the whole system is old and often uses obsolete technologies. There is no naming convention (files, functions, variables, ...), they don't use version control, they don't use exceptions or polymorphism, and it seems like almost everybody has lost their passion (some of them are only 30 years old). I'd like to suggest some changes, but I don't want to be "the new guy who wants to change everything just because he doesn't want to fit in". I tried to fit in, but actually it takes me one week to do what I would do in one afternoon, just because of the poor tools we're forced to use. A lot of my colleagues never look at the new "things" and techniques that people use nowadays. It's like they've just given up. The situation is really frustrating.

    Have you ever been in a similar situation and, if so, what advice would you give me? Is there a subtle way of changing things without becoming the black sheep here? Or should I just give up my passion and energy as well? Thank you.

    Update: just in case anyone cares, following your precious advice I was able to suggest changes and am now in charge of the team that must create and deploy Subversion :D Thanks to all of you!

    Read the article

  • Do I need to notify a user if I am using statistics software in an iPhone app?

    - by Chris
    Hello, I am currently creating a (very simple) Objective-C client to send basic statistical data to my server for an iPhone app - just things like the state of the app (first launch or launch, error, etc.), along with the make/model/version (i.e. "iPod touch 4.2"). No personally identifiable information or location data is sent. Is there anything, in the Apple Developer agreement or otherwise, that states that I must notify the user if I am doing this? I'm not interested in selling the data or anything, I just want to use the data to make my apps better. I am not averse to telling the user I am doing this if it is required, I just don't want to scare the users (the paranoid "oooh, they're tracking me, they know exactly where I am" crowd) if I don't have to. Thanks for any advice.

    Read the article

  • When to learn the command line version of a programming tool?

    - by explorest
    Almost every programming tool has a command line version; many also have a GUI version. It takes a lot of time and memorization effort to learn the different commands and various options/switches of the command line version. So I have a couple of questions (which are not necessarily mutually exclusive):

    1) When would you bother learning/memorizing the commands in the command line version of a tool which also comes in a GUI version?

    2) Which tools should I learn the command line version of? Compilers? Version control systems? Etc.

    Read the article

  • Will an online degree get you a job that requires "CS or equivalent 4-year degree"? [on hold]

    - by qel
    I'm a nerdy slacker type who didn't get my life together till I was 30. I've had a real job for a couple years doing C#/SQL. I've gotten several raises, but I'm making less than most developers, and the atmosphere is ... not positive. Looking for a new job, I think my applications get thrown out because I don't have a degree. And I want to finish a Bachelor's just to feel like less of a loser. I have a lot of college credits from 1996-2003 and a low GPA, so I don't know if that's worth much. An online degree looks like a good option, but I just don't know what I should be looking at for online schools because they all look like fake degrees. If they had programs equivalent to a real Comp Sci degree, I don't think they would have weird sounding names like they do. University of Phoenix has a B.S./Information Technology-Software Engineering. DeVry has a B.S./Computer Engineering Technology program. But that's not CS, and most other things I see have even more fake-sounding names. Are these useless degrees? Some people say DeVry and UoP are acceptable, some people say they're a joke. I have enough experience now, though, that maybe all I'm missing is being able to check the box that I have a 4-year degree. Harvard Extension seems like a real degree, even if it isn't a real Harvard degree, but I'd have to live there at least 3 months, which kinda defeats the purpose of an online degree fitting around work.

    Read the article

  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup. I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me. For my day job I work at a software house that uses Microsoft tech on a day-to-day basis: we utilise .NET, SQL Server, Windows Server etc.

    However, I realise that as a startup we need to keep costs down, and after having a brief look at the cost of hosting for Windows I was shocked to see some of the prices for a dedicated server. The cheapest I found was £100 a month. Also, if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out tens of thousands of pounds a year in SQL Server / Windows Server licenses etc. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was waaaaaay lower than Windows hosting. One place was offering a machine with 2 cores for less than £20 a month.

    This got me thinking that maybe the way to go is open source on Linux. As I write a lot of JavaScript at work (I'm working on a single-page Backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL, why not use an open source NoSQL database like MongoDB, which has great support on NodeJS? My only concern is that some of the work the application is going to do is dynamically building images and various other image-related stuff, i.e. stuff that is quite CPU heavy - so I'm thinking of maybe writing anything CPU heavy in C++ and consuming it as a module in Node.

    That's the background - but basically, is Linux a good match for:

    - hosting a NodeJS/Express site?
    - compiling C++ Node modules?
    - using a NoSQL DB like MongoDB?

    And is it a good idea to move to these unfamiliar technologies to save money?
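
    On the "writing anything CPU heavy in C++ and consuming it as a module in Node" part, here is a minimal sketch of what such a native addon can look like, assuming the node-addon-api (napi.h) wrapper and node-gyp for the build; processImage and its doubling "work" are placeholders, not anything from the post:

        // Minimal native-addon sketch (node-addon-api assumed).
        #include <napi.h>

        Napi::Value ProcessImage(const Napi::CallbackInfo& info) {
            Napi::Env env = info.Env();
            if (info.Length() < 1 || !info[0].IsNumber()) {
                Napi::TypeError::New(env, "expected a number").ThrowAsJavaScriptException();
                return env.Null();
            }
            double pixels = info[0].As<Napi::Number>().DoubleValue();
            double result = pixels * 2.0;            // stand-in for the heavy lifting
            return Napi::Number::New(env, result);
        }

        Napi::Object Init(Napi::Env env, Napi::Object exports) {
            exports.Set("processImage", Napi::Function::New(env, ProcessImage));
            return exports;
        }

        NODE_API_MODULE(image_addon, Init)

    Once built, the Express code would require() the compiled module like any other, so the NodeJS-plus-C++ split described above is well supported on Linux.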

    Read the article

  • Why doesn't the Next button work in the Joomla! 2.5.4 installation? [closed]

    - by rahul
    I was trying to install Joomla! 2.5.4, but I got stuck at the very first step: the Next button doesn't respond to clicks. I tried a previous version, 1.5.26, but there the process hung after the 3rd step - at the 4th step the Next button doesn't work, just as before. I'm not sure what to do. I am using XAMPP for my localhost. Please guide me; I've lost a whole day installing Joomla.

    Read the article

  • What term is used to describe running frequent batch jobs to emulate near real time

    - by Steven Tolkin
    Suppose users of application A want to see the data updated by application B as frequently as possible. Unfortunately, app A and app B cannot use message queues, and they cannot share a database. So app B writes a file, and a batch job periodically checks to see if the file is there, and if it is, loads it into app A. Is there a name for this concept? A very explicit and geeky description: "running very frequent batch jobs in a tight loop to emulate near real time". This concept is similar to "polling". However, polling has the connotation of being very frequent, multiple times per second, whereas the most often you would run a batch job would be every few minutes. A related question: what is the tightest loop that is reasonable? Is it 1 minute, or 5 minutes, or ...? Recall that the batch jobs are started by a batch job scheduler (e.g. Autosys, Control-M, CA ESP, Spring Batch etc.), so running a job too frequently would cause overhead and clutter.
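
    As a concrete picture of the mechanism (not an answer to the naming question), a drop-file poller reduces to something like the following C++17 sketch; the path and the five-minute interval are arbitrary examples, and in the setup described the cadence would really live in the batch scheduler rather than in a sleep loop:

        // Minimal file-drop polling sketch (C++17).
        #include <chrono>
        #include <filesystem>
        #include <iostream>
        #include <thread>

        int main() {
            const std::filesystem::path drop = "/data/from_app_b/export.csv";
            while (true) {
                if (std::filesystem::exists(drop)) {
                    std::cout << "loading " << drop << " into app A\n";  // load step
                    std::filesystem::remove(drop);   // consume it so it isn't reloaded
                }
                std::this_thread::sleep_for(std::chrono::minutes(5));    // the "loop"
            }
        }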

    Read the article

  • UML Receptions and AcceptEventActions

    - by Silli
    What should the relationship be between the receptions of a class (this said "classifier" before Aadaam's correction) and the AcceptEventActions in the activity describing the behavior of its instances? I understand that the former relates to signal reception declared on the type, while the latter relates to runtime ReceiveSignalEvent occurrences on the class's instances (objects). But it is not totally clear to me how to express consistency among these constructs.

    Read the article

  • What do I need to consider when deciding whether to rename and recompile source package names because of company rebranding?

    - by Roberto Linares
    My company is currently going through a rebranding process, and the old brand names have been used in the sources' package names. These names are only visible to the developers who maintain the code, so nobody from project management is really interested in changing them, especially since it would mean recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know if I should worry about legal issues or not, and if so, how to address this with project management.

    More background details: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs the package-name change, so I cannot take the decision by myself - I don't want to make everyone else's code crash with my core components, and I cannot change other areas' code without the permission of those areas' users. So yes, my concern is more political than technical. I am going to try to coordinate with the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous integration build server, so we build our code manually on demand, and to get something to production I have to justify the change (even just the package name change) to QA with a user requirement and some other bureaucratic documentation; that's why I was hesitant about the change in the first place.

    Read the article

  • Why don't languages use explicit fall-through on switch statements?

    - by zzzzBov
    I was reading Why do we have to use break in switch?, and it led me to wonder why implicit fall-through is allowed in some languages (such as PHP and JavaScript), while there is no support (AFAIK) for explicit fall-through. It's not like a new keyword would need to be created, as continue would be perfectly appropriate, and it would resolve any ambiguity about whether the author meant for a case to fall through. The currently supported form is:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...          // ambiguous: was break forgotten?
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    Whereas it would make sense for it to be written as:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...
                continue;    // unambiguous: the author is explicit
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    For the purposes of this question, let's ignore whether fall-throughs are good coding style. Are there any languages that allow fall-through and have made it explicit? Are there any historical reasons that switch allows implicit fall-through instead of explicit?
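
    For what it's worth, at least one language has since added an explicit marker without inventing a new keyword: C++17's [[fallthrough]] attribute. Implicit fall-through stays legal there, but the attribute documents intent and silences compiler fall-through warnings - a small sketch, with made-up helper names:

        #include <iostream>

        void stepOne()     { std::cout << "one\n"; }
        void stepTwo()     { std::cout << "two\n"; }
        void handleOther() { std::cout << "other\n"; }

        void dispatch(int s) {
            switch (s) {
                case 1:
                    stepOne();
                    [[fallthrough]];   // explicit: case 2's body is meant to run too
                case 2:
                    stepTwo();
                    break;
                default:
                    handleOther();
                    break;
            }
        }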

    Read the article

  • Extension objects pattern

    - by voroninp
    In this MSDN Magazine article Peter Vogel describes the Extension Objects pattern. What is not clear is whether extensions can later be implemented by client code residing in a separate assembly. And if so, how in this case can an extension get access to private members of the object being extended?

    I quite often need to set different access levels for different classes. Sometimes I really need descendants not to have access to a member while a separate class does (good old friend classes). Currently I solve this in C# by exposing callback properties in the interface of the external class and setting them with private methods. This also allows adjusting access - read-only or read/write - depending on the desired interface.

        class Parent
        {
            private int foo;

            public void AcceptExternal(IFoo external)
            {
                external.GetFooCallback = () => this.foo;
            }
        }

        interface IFoo
        {
            Func<int> GetFooCallback { get; set; }
        }

    Another way is to explicitly implement a particular interface. But I suspect more approaches exist.

    Read the article

  • Is it a good programming practice to have a class with several .h files?

    - by Jim Thio
    I suppose the class would have several different interfaces: some it shows to some classes, some it shows to other classes. Are there any good reasons for that? One thing I can think of is that with one .h per class, an interface would either be public or private. What if I want some interface to be available to some friend classes and some interface to be truly public? Sample:

        @interface listNewController : BadgerStandardViewViewController
            <UITableViewDelegate, UITableViewDataSource, UITextFieldDelegate,
             NSFetchedResultsControllerDelegate, UIScrollViewDelegate,
             UIGestureRecognizerDelegate>
        {
        }

        @property (nonatomic) IBOutlet NSFetchedResultsController *FetchController;
        @property (nonatomic) IBOutlet UITextField *searchBar1;
        @property (nonatomic) IBOutlet UITableView *tableViewA;

        + (listNewController *)singleton; // for easier access

        - (void)collapseAll;
        - (void)TitleViewClicked:(TitleView *)theTitleView;
        - (NSUInteger)countOfEachSection:(NSInteger)section;

        @end

    Many of those public properties and functions are only ever called by just one other class, and I wonder why I need to make them available to many classes. It's in Objective-C, by the way.

    Read the article

  • At what point of modification to the original does source code with no license become owned by me?

    - by nathansizemore
    I've recently come across a publicly viewable project on GitHub that has no license associated with it. In this repo there is a file with the logic and most of the code needed to work as a piece of a project I am working on. Not verbatim, but about 60% of it I'd like to use with various modifications. Once my code base is a little more stable, I plan to release what I've done under the WTFPL license. I've emailed the repo owner, and so far have not gotten a reply. I know I have the right to fork the repo, but if I release a stripped-down and modified version of the other project's file with mine, under the WTFPL, am I infringing on copyrights? Per GitHub's Terms of Service, by submitting a project on GitHub and making it viewable to the public, you are allowing other users to see and fork your project. They don't say anything about modifying, distributing, or using the fork. And at what point of modification to the original does it become owned by me?

    Read the article

  • How should my local git workflow work?

    - by Anonymous -
    At home, I have a server that is running some software (on a LAMP stack, but only accessible internally). I have another machine and a laptop that I both use for developing said software. What is the best workflow for me? Should I have a repository on my local server, create a live branch, a staging branch and a development branch, then check out the development branch from my laptop/development PC to work on, commit that back when I'm done, then merge the development branch with the staging branch for testing, before further merging to the live branch? Would I simply check out the production branch to /www/var/ on my server? Or am I thinking/going about this all wrong? Thanks.

    Read the article

  • Google App Engine: How to be notified when APIs change or become available?

    - by herpylderp
    I am thinking about writing a GAE app but am a little hesitant because the EULA gives Google full rights to change their APIs anytime they want, for any reason. Obviously they'd be out of business quickly if they just up and refactored their entire APIs, so I have to imagine they have some kind of notification system, perhaps even an RSS feed, to notify developers well in advance of looming changes or new features coming out in future releases. However, for the life of me I can't seem to find any trace of the existence of such a notification system. Perhaps the Google forums are the only place to get such updates? I guess I'm asking any battle-worn GAE veterans for reassurance that there are reliable ways of getting notifications about policy or API changes from Google, such that I could react and make the necessary app changes without production breaking or impacting any clients. Thanks in advance!

    Read the article

  • Is it reasonable for REST resources to be singular and plural?

    - by Evan
    I have been wondering if, rather than a more traditional layout like this:

        api/Products
            GET    // gets product(s) by id
            PUT    // updates product(s) by id
            DELETE // deletes product(s) by id
            POST   // creates product(s)

    Would it be more useful to have a singular and a plural, for example:

        api/Product
            GET    // gets a product by id
            PUT    // updates a product by id
            DELETE // deletes a product by id
            POST   // creates a product

        api/Products
            GET    // gets a collection of products by id
            PUT    // updates a collection of products by id
            DELETE // deletes a collection of products (not the products themselves)
            POST   // creates a collection of products based on filter parameters passed

    So, to create a collection of products you might do:

        POST api/Products
        {data: filters}           // returns api/Products/<id>

    And then, to reference it, you might do:

        GET api/Products/<id>     // returns array of products

    In my opinion, the main advantage of doing things this way is that it allows for easy caching of collections of products. One might, for example, put a lifetime of an hour on collections of products, thus drastically reducing the calls on a server. Of course, I currently only see the good side of doing things this way, so what's the downside?

    Read the article

  • What Are Some Tips For Writing A Large Number of Unit Tests?

    - by joshin4colours
    I've recently been tasked with testing some COM objects of the desktop app I work on. What this means in practice is writing a large number (100) of unit tests to test different but related methods and objects. While the unit tests themselves are fairly straightforward (usually one or two Assert()-type checks per test), I'm struggling to figure out the best way to write these tests in a coherent, organized manner. What I have found is that copy-and-paste coding should be avoided: it creates more problems than it's worth, and it's even worse than copy-and-paste code in production code because test code has to be updated and modified more frequently. I'm leaning toward trying an OO approach, but again, the sheer number of tests makes even this approach daunting from an organizational standpoint, due to concerns about maintenance. It also doesn't help that the tests are currently written in C++, which adds some complexity with memory management issues. Any thoughts or suggestions?
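
    One common way to keep a batch of near-identical C++ tests from collapsing into copy-and-paste is a value-parameterized test, where the per-case data lives in a table and the test body is written once. A minimal sketch, assuming a reasonably recent Google Test; RoundTripCase and the doubling check are hypothetical stand-ins for the real COM calls:

        #include <gtest/gtest.h>

        struct RoundTripCase {
            int input;
            int expected;
        };

        class WidgetRoundTripTest : public ::testing::TestWithParam<RoundTripCase> {};

        // One test body, run once per table entry.
        TEST_P(WidgetRoundTripTest, ReturnsExpectedValue) {
            const RoundTripCase& c = GetParam();
            EXPECT_EQ(c.expected, c.input * 2);   // placeholder for the real call
        }

        INSTANTIATE_TEST_SUITE_P(
            BasicCases, WidgetRoundTripTest,
            ::testing::Values(RoundTripCase{1, 2},
                              RoundTripCase{5, 10},
                              RoundTripCase{0, 0}));

    The same table-driven idea also works without a framework: a plain array of cases looped over inside one test function already removes most of the duplication.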

    Read the article

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes making a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .NET 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example:

        SortedList<int, SortedList<int, SortedList<int, MyClass>>>

    The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time, and this allows me time to look at new technology and learn. In terms of using things like TDD, my development manager has no problem with it and can kind of see the purpose of it, he just won't use it himself. This means I am alone in learning new things and often resort to StackOverflow to make sure I get things right. The company has a lot of flexibility: I can work from home if need be, and when my daughter was born they let me work from home one day a week. However, they expect this flexibility in return, sometimes asking me on a Friday afternoon to travel for the following week, sometimes abroad. We are also pretty much on call 24/5, as we have engineers in various countries. Also, we have no testers, so most of the testing is done by us developers and some by engineers. Either way, no one likes testing!

    I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices, but that appears to have relaxed now, although I suspect it is still reasonably formal. There is a new team of developers I don't know, and they are about to move to new offices. The team lead is a guy who was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation; I think some of the Victorian practices may have rubbed off on him. However, he did say that if things crop up, then as long as he can trust the person they can work at home, although he prefers people in the office. The team uses Scrum, TDD and SOLID design principles, so they are quite up to date in technology. They are reasonably Microsoft focused. It appears the Technical Director might be the R&D man, researching new technology on his own and not letting developers play with it. He might be a super-developer who makes all the decisions that no one can argue with. They are currently moving from NHibernate to Entity Framework, based on their queries seeming to fail sometimes and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused and they are an expanding company making a profit each year. I'm not sure what the team morale is and whether they have a laugh; when I had a tour around the office they sat in dead silence.

    I'm really unsure which role is the best for me, and going with my gut instinct is useless as I'm not sure what my gut is telling me. Based on the information above, which role would you choose and why?

    Read the article

  • Types in Lisp and Scheme

    - by user2054900
    I see now that Racket has types. At first glance it seems almost identical to Haskell typing. But does Lisp's CLOS cover some of the space Haskell types cover? Creating a very strict Haskell type and an object in any OO language seem vaguely similar. It's just that I've drunk some of the Haskell kool-aid and I'm totally paranoid that if I go down the Lisp road, I'll be screwed due to dynamic typing.

    Read the article

  • Am I violating LSP if the condition can be checked?

    - by William
    This base class for some shapes I have in my game looks like this. Some of the shapes can be resized, some of them cannot.

        private Shape shape;

        public virtual void SetSizeOfShape(int x, int y)
        {
            if (CanResize())
            {
                shape.Width = x;
                shape.Height = y;
            }
            else
            {
                throw new Exception("You cannot resize this shape");
            }
        }

        public virtual bool CanResize()
        {
            return true;
        }

    In a subclass of a shape that I don't ever want to resize, I am overriding the CanResize() method so a piece of client code can check before calling the SetSizeOfShape() method.

        public override bool CanResize()
        {
            return false;
        }

    Here's how it might look in my client code:

        public void DoSomething(Shape s)
        {
            if (s.CanResize())
            {
                s.SetSizeOfShape(50, 70);
            }
        }

    Is this violating LSP?

    Read the article

  • Good practices when writing a parser for a standard file format (such as ePub)

    - by J-F L-R
    I am considering writing Android reader software that can read ePubs and display them. I checked the ePub standard documents, but they contain a lot of information, so I am wondering what the process is for implementing a standard file format. What are the steps to get a working implementation without skipping over parts of the standard? Are there any best practices? Also, is it even possible to program this alone in a reasonable time?

    From what I have already found out, ePub is basically a zip archive, which means I could probably use zlib to decompress it. The content is in XHTML and CSS, so I believe it should be possible to display it in a WebView. The parts that are missing are the code that reads the metadata and handles the non-standard XHTML extensions.
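
    To make the "ePub is basically a zip archive" point concrete: the first step is pulling META-INF/container.xml out of the archive, since it points at the OPF package file holding the metadata and spine. The post targets Android, but purely to show the container layout, here is a rough desktop-style C++ sketch assuming libzip is available (error handling trimmed, XML parsing left out):

        // Rough sketch: read META-INF/container.xml from an .epub via libzip.
        #include <zip.h>
        #include <iostream>
        #include <string>

        int main(int argc, char** argv) {
            if (argc < 2) { std::cerr << "usage: epubdump book.epub\n"; return 1; }

            int err = 0;
            zip_t* archive = zip_open(argv[1], ZIP_RDONLY, &err);
            if (!archive) { std::cerr << "not a readable zip archive\n"; return 1; }

            zip_stat_t st;
            if (zip_stat(archive, "META-INF/container.xml", 0, &st) != 0) {
                std::cerr << "missing container.xml - not a valid ePub\n";
                zip_close(archive);
                return 1;
            }

            std::string xml(st.size, '\0');
            zip_file_t* f = zip_fopen(archive, "META-INF/container.xml", 0);
            if (!f) { zip_close(archive); return 1; }
            zip_fread(f, xml.data(), xml.size());
            zip_fclose(f);
            zip_close(archive);

            std::cout << xml << '\n';   // parse this XML to find the .opf path
            return 0;
        }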

    Read the article

  • Should I sacrifice code succinctness to ensure the narrowest variable scope? [duplicate]

    - by David Scholefield
    This question already has an answer here: Is the usage of internal scope blocks within a function bad style?

    In many languages (e.g. both Perl and Java, which are the two languages I work with most) it is possible to narrow the scope of local variables by declaring them within a block. Although it adds extra code length (the opening and closing block braces) and possibly reduces readability, should I create blocks purely to narrow the scope of variables to the statements that use them and uphold the principle of narrowest scope, or does this sacrifice succinctness and readability just to unnecessarily uphold an agreed 'best practice' principle? I usually declare local variables at the start of a function/method to aid readability, but I could instead create blocks throughout the function and declare the variables within those blocks to narrow their scope.
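
    For reference, this is the kind of block in question (Java and C++ scope braces the same way; the names here are arbitrary):

        #include <iostream>

        void handleRequest() {
            // ... unrelated work above ...

            {   // bare block: 'total' exists only between these braces
                int total = 0;
                for (int i = 1; i <= 10; ++i) {
                    total += i;
                }
                std::cout << total << '\n';
            }
            // 'total' is out of scope here; the name cannot leak into later code
        }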

    Read the article
