Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Is "code that generates code" really all that great?

    - by Jaxo
    I was looking through CodePen's "popular pens" and I noticed this cool little spiral animation somebody made with a seemingly ridiculously small amount of code. This is quite impressive until you click the headings for HTML and CSS to show the "compiled" versions of the same code. Suddenly the 3 lines of HAML and ~40 lines of SCSS turn into a gigantic monster of repetition. Here's where my question comes in: is it acceptable to do something like this in practice? Don't get me wrong - I love using preprocessors to help me write code faster, but in some cases it looks like an automatic copy-paste machine.

    Read the article

  • Implementing the transport layer for a SIP UAC

    - by Jonathan Henson
    I have a somewhat simple, but specific, question about implementing the transport layer for a SIP UAC. Do I expect the response to a request on the same socket that I sent the request on, or do I let the UDP or TCP listener pick up the response and then route it to the correct transaction from there? The RFC does not seem to say anything on the matter. It seems that, especially with UDP, which is connectionless, I should just let the listeners pick up the response, but that seems somewhat counterintuitive. In particular, I have seen plenty of UAC implementations which do not depend on having a listener in the transport layer. Also, most implementations I have looked at do not have the UAS receiving loop responding on the socket at all, which would tend to indicate that the client should not be expecting a reply on the socket it sent the request on.

    For clarification, suppose my transport layer consists of the following elements:

    - TCPClient (sends requests for a UAC via TCP)
    - UDPClient (sends requests for a UAC via UDP)
    - TCPServer (loop receiving requests and dispatching to the transaction layer via TCP)
    - UDPServer (loop receiving requests and dispatching to the transaction layer via UDP)

    Obviously, the *Client sends my requests. The question is: what receives the response? The *Client waiting on a recv or recvfrom call on the socket it used to send the request, or the *Server? Conversely, the *Server receives my requests - what sends the response? The *Client? Doesn't this break the roles of each member a bit?
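    For what it's worth, here is a minimal C# sketch of the "one socket, one receive loop" arrangement described above. The class and method names are made up for illustration, and it is nowhere near a full SIP transport: the idea is simply that the transport binds one UDP socket to the port it advertises in its Via header, sends requests from that socket, and lets a single receive loop hand every inbound datagram - request or response alike - to the transaction layer for matching, so the sender never blocks on its own recvfrom.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading.Tasks;

        // Illustrative only: one UDP socket both sends requests and feeds a single
        // receive loop; the transaction layer decides whether a datagram is a new
        // request or the response to something we sent earlier.
        class UdpTransportSketch
        {
            private readonly UdpClient _socket;

            public UdpTransportSketch(int localSipPort)
            {
                _socket = new UdpClient(localSipPort);   // e.g. 5060, the port in our Via
            }

            public Task SendAsync(string message, IPEndPoint remote)
            {
                byte[] bytes = Encoding.UTF8.GetBytes(message);
                return _socket.SendAsync(bytes, bytes.Length, remote);
            }

            // Every inbound datagram goes to the same dispatch point.
            public async Task ReceiveLoopAsync(Action<string, IPEndPoint> dispatchToTransactionLayer)
            {
                while (true)
                {
                    UdpReceiveResult datagram = await _socket.ReceiveAsync();
                    string message = Encoding.UTF8.GetString(datagram.Buffer);
                    dispatchToTransactionLayer(message, datagram.RemoteEndPoint);
                }
            }
        }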

    Read the article

  • Reasons for either 32-bit or 64-bit as development machine

    - by vartec
    I'm about to make a new Linux install, which will be primarily used for programming. I've seen benchmarks showing speed improvements in the 64-bit version; however, I have a hard time telling how much these benchmarks translate to improvement in everyday usage. And of course there are other aspects to consider. The usage I have in mind:

    - mainly programming Python, with occasional C, C++ and Java;
    - IDEs which run on the Java platform (Eclipse and IntelliJ);
    - on very rare occasions, having to compile for the 32-bit platform;
    - not planning to have more than 64GB of RAM anytime soon (and I don't mind using PAE kernels);
    - the machine in question has 4GB RAM and an Athlon II X2.

    What are the pros and cons of choosing either an i386 or x86_64 distro?

    Read the article

  • Concurrency with listeners and upload: is this solution sound?

    - by cbrulak
    I'm working on an Android app. We have listeners for position data, and camera capture which is timed on a background thread (not a service). The timer thread dictates when the image is captured and also when the data is cached in a local sqlite db. So, my question is how to properly store the location data as it comes in from the listeners, and pull that data so that the database can be updated as the camera capture is executed. I can't put the location data into the database as it arrives because it is processed more frequently than the camera images, and the camera images are what dominate the architecture at the moment. (Location data is supplementary.) However, I need the location data to get a rough location of where the image is captured. My first thought was to have a singleton store the location data, and to put a semaphore on the getInstance method so that, if the database update is happening (after an image is captured), we don't get an error. The location data can wait for the database update, or it can be lost for that particular event; it doesn't really matter. What are your thoughts? Am I on the right track? (And is this the right sub-site, or would this be better on stackoverflow?)
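    One way to picture the singleton idea is a language-agnostic sketch like the one below (written in C# only to keep the examples in this listing in one language; the same shape works as a Java singleton with synchronized methods). The listeners only ever overwrite the latest fix, and the capture/database thread takes a consistent snapshot at the moment the row is written, so the two threads never interleave mid-update. All names here are illustrative.

        using System;

        public sealed class LatestLocationStore
        {
            private static readonly LatestLocationStore instance = new LatestLocationStore();
            public static LatestLocationStore Instance => instance;

            private readonly object gate = new object();
            private double latitude, longitude;
            private DateTime timestampUtc;

            private LatestLocationStore() { }

            // Called from the location listener whenever a new fix arrives.
            public void Update(double lat, double lon)
            {
                lock (gate)
                {
                    latitude = lat;
                    longitude = lon;
                    timestampUtc = DateTime.UtcNow;
                }
            }

            // Called from the capture/DB thread; returns a consistent snapshot,
            // so a listener update can never interleave with the database write.
            public (double Lat, double Lon, DateTime TimeUtc) Snapshot()
            {
                lock (gate)
                {
                    return (latitude, longitude, timestampUtc);
                }
            }
        }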

    Read the article

  • Access functions from user control without events?

    - by BornToCode
    I have an application made with user controls, and a function on the main form that removes the previous user controls and shows the desired user control centered and tweaked:

        public void DisplayControl(UserControl uControl)

    I find it much easier to make this function static, or to access it by reference from the user control, like this:

        MainForm mainform_functions = (MainForm)Parent;
        mainform_functions.DisplayControl(uc_a);

    You probably think it's a sin to access a function in the main form from the user control; however, raising an event seems much more complex in such a case. I'll give a simple example: let's say I raise an event from usercontrol_A to show usercontrol_B on the main form, so I write this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
        };

    Now what if I want usercontrol_B to also have an event to show usercontrol_C? Then it would look like this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
            uc_b.show_uc_c += (s2, e2) =>
            {
                usercontrol_C uc_c = new usercontrol_C();
                DisplayControl(uc_c);
            };
        };

    THIS LOOKS AWFUL! The code is much simpler and more readable when you actually access the function from the user control itself, so I came to the conclusion that in such a case it's not so terrible if I break the rules and don't use events for such a general function. I also think that a readable user control that needs small adjustments for another app is preferable to a 100% 'generic' one which makes my code look like a pile of mud. What is your opinion? Am I mistaken?
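    A middle ground that avoids both the cast and the event chain is worth sketching: the main form implements a small navigation interface and hands it to each user control, so a control can request navigation in one line without knowing the concrete MainForm type. The interface name and constructors below are illustrative, not from the question.

        using System;
        using System.Windows.Forms;

        // Illustrative sketch: controls depend on INavigator, not on MainForm.
        public interface INavigator
        {
            void Show(UserControl control);
        }

        public class MainForm : Form, INavigator
        {
            public void Show(UserControl control)
            {
                // Stands in for the existing DisplayControl(UserControl): remove the
                // previous control, add the new one, center and tweak it.
                Controls.Clear();
                Controls.Add(control);
            }
        }

        public class usercontrol_B : UserControl
        {
            public usercontrol_B(INavigator navigator) { /* wire up child views here */ }
        }

        public class usercontrol_A : UserControl
        {
            private readonly INavigator navigator;

            public usercontrol_A(INavigator navigator)
            {
                this.navigator = navigator;
            }

            private void ShowB_Click(object sender, EventArgs e)
            {
                // One line at the call site: no (MainForm)Parent cast, no nested events.
                navigator.Show(new usercontrol_B(navigator));
            }
        }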

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation run on them. Records will have a combination of these functions run: Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this:

        Date;Data;ID

    Question: assuming that ID is an int of some kind, and that int corresponds to a matrix of some delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (Sum, Count, and PrintOut), but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");

                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();

                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }

                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);

                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
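    To make the "launch map" idea concrete, here is a hedged sketch: key a dictionary by the record's ID and store the array of delegates to run for that ID, then look it up per record and invoke (or queue) each entry. The record layout (Date;Data;ID) and the handler names come from the question; the ID-to-handler mapping values are invented for illustration.

        using System;
        using System.Collections.Generic;

        internal static class LaunchMapSketch
        {
            private delegate int Handler(int data);

            // The "launch map": which delegates to run for a given record ID.
            private static readonly Dictionary<int, Handler[]> LaunchMap = new Dictionary<int, Handler[]>
            {
                { 0, new Handler[] { } },                   // ignore
                { 1, new Handler[] { Sum } },
                { 2, new Handler[] { PrintOut } },
                { 3, new Handler[] { Sum, PrintOut } },     // several handlers for one ID
            };

            public static void Dispatch(string record)
            {
                // Record layout from the question: "Date;Data;ID"
                string[] parts = record.Split(';');
                int data = int.Parse(parts[1]);
                int id = int.Parse(parts[2]);

                if (!LaunchMap.TryGetValue(id, out Handler[] handlers))
                    return;                                 // unknown ID -> ignore

                foreach (Handler handler in handlers)
                    handler(data);                          // or queue each call on the thread pool
            }

            private static int Sum(int data) { return data; }
            private static int PrintOut(int data) { Console.WriteLine(data); return data; }
        }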

    Read the article

  • Github Workflow: Pushing small fix branches to remote, or keep them local?

    - by Isaac Hodes
    In Scott Chacon's workflow (explained e.g. in this SO answer), with essentially two silos (development and master), if, say, I have a small bug to fix (e.g. one that can be fixed with a few characters), is the optimal way of doing that:

    a) branch off of development into a branch called e.g. fix_123, push this branch to origin as I work on it, and when it's done, code-reviewed, whatever, merge it into development and push development to origin; or
    b) the same as above, but without pushing fix_123 to origin?

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100MBs/day) and a bunch of console and Tcl/Tk tools to view it - I want to turn it into a web app that I can build, and others can maintain.

    In long: my group runs simulations yielding 100s of MBs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990's-style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories, etc. I want to replace (or at least supplement) the X Windows / console-style UI with a web-based one, so I need the following properties:

    - pleasant to program;
    - can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything);
    - as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it;
    - I can attach a web UI to navigate between views - each view is likely to contain keys which might make sense to view in another.

    I am new to building systems that have logic on both the back-end and front-end of a web server. From that point of view, they do this:

    - backend: wraps old-school executables, constructs calls into them, then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance the tool might generate a number of indexed images (per invocation), which I might deliver all at once or on demand. May (probably) need to do heavy stats on some sources.
    - frontend: provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. Probably will have some views with a lot of interactivity.

    Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "Beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). My policy is to use the best/closest-matched language for a project, but most of my team are extremely low-level (i.e. pipeline stages and CDyn), so I don't have the peer group to know where to start.
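    Whatever framework ends up being chosen, the backend half of this ("wrap the old executable, capture its output, serve it over HTTP") is small enough to sketch. The sketch below uses C# and only base-class-library pieces purely for illustration - the tool name, argument and URL are placeholders, and any web framework people suggest would replace the raw listener loop with proper routing and error handling.

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Text;

        // Minimal sketch of the "backend wraps an old-school executable" idea:
        // HTTP request in, construct a command line, capture stdout, return it.
        internal static class CliWrapperSketch
        {
            public static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/report/");
                listener.Start();

                while (true)
                {
                    HttpListenerContext context = listener.GetContext();
                    string dataset = context.Request.QueryString["dataset"] ?? "default";

                    // Wrap the existing console tool instead of porting it up front.
                    var psi = new ProcessStartInfo("legacy-report-tool", "--dataset " + dataset)
                    {
                        RedirectStandardOutput = true,
                        UseShellExecute = false
                    };

                    string output;
                    using (Process process = Process.Start(psi))
                    {
                        output = process.StandardOutput.ReadToEnd();
                        process.WaitForExit();
                    }

                    byte[] body = Encoding.UTF8.GetBytes(output);
                    context.Response.ContentType = "text/plain";
                    context.Response.OutputStream.Write(body, 0, body.Length);
                    context.Response.Close();
                }
            }
        }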

    Read the article

  • Can a NodeJS webserver handle multiple hostnames on the same IP?

    - by Matthew Patrick Cashatt
    I have just begun learning NodeJS and LOVE it so far. I have set up a Linux box to run it and, in learning to use the event-driven model, I am curious if I can use a common IP for multiple domain names. Could I point, for example, www.websiteA.com, www.websiteB.com, and www.websiteC.com all to the same IP (node webserver) and then route to the appropriate source files based on the request? Would this cause certain doom when it came to scaling to any reasonable size?

    Read the article

  • Preparing for interview questions involving scale

    - by Chaitanya
    I have over 6 years of software development experience. I have worked on multiple platforms, including mobile. However, I have not had a chance to work on scale-related issues. As a consequence, whenever someone asks an interview question involving a million inputs, I find myself out of my depth. How do I prepare for such questions? Any books/resources to refer to? There are books for Java, data structures and concurrency, but I don't know of any definitive ones for learning about scaling.

    Read the article

  • Minimizing data sent over a webservice call on expensive connection

    - by aceinthehole
    I am working on a system that has many remote laptops all connected to the internet through cellular data connections. The application synchronizes periodically to a central database. The problem is, due to factors outside our control, moving data across the cellular networks is spectacularly expensive. Currently we are sending a compressed XML file across the wire, where it is processed and various things are done with it (mainly stuffing it into a database). My first couple of thoughts were to convert that XML doc to JSON just prior to transmission and convert it back to XML just after receipt on the other end, getting some extra compression for free without changing much, and to test various other compression algorithms to determine the smallest one possible. Although, I am not entirely sure how much difference JSON vs XML would make once it is compressed. I thought that there must be resources available that address this problem from an information theory perspective. Does anyone know of any such resources, or have suggestions on what direction to go in? This is developed on the MS .NET stack on Windows, for reference.
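    Since the payloads already exist, the "how much does JSON vs XML matter once compressed" part can be answered empirically rather than from theory. A hedged C# sketch (the file names are placeholders): gzip both representations of the same sync batch and compare byte counts; the same harness works for any other codec worth trying.

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        internal static class PayloadSizeCheck
        {
            // Returns the gzipped size in bytes of a text payload.
            public static int GzippedSize(string payload)
            {
                byte[] raw = Encoding.UTF8.GetBytes(payload);
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    return output.ToArray().Length;
                }
            }

            public static void Main()
            {
                // Placeholder file names: the same sync batch in both representations.
                string xmlPayload = File.ReadAllText("sync-batch.xml");
                string jsonPayload = File.ReadAllText("sync-batch.json");

                Console.WriteLine("XML  raw/gzipped bytes: {0}/{1}", xmlPayload.Length, GzippedSize(xmlPayload));
                Console.WriteLine("JSON raw/gzipped bytes: {0}/{1}", jsonPayload.Length, GzippedSize(jsonPayload));
            }
        }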

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), then he gets redirected to the login view. The question is about the routing design: should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism?

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works, how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it looks to provide only 2 user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here - can ACL be set in the config file? Thanks.

    Read the article

  • Choosing a crossplatform for mobile development

    - by Mech0z
    I am creating an enterprise project and will develop the app for WP, Android and iPhone (maybe also tablets). I have done some research on what solutions are out there and have narrowed my choice down to 3 platforms (due to my requirement to work with Bluetooth):

    - Mono (MonoTouch, Mono for Android)
    - PhoneGap
    - RhoMobile

    The biggest requirement other than Bluetooth is the need to create good interfaces and modular programming so it's easy to maintain the whole solution. Other than that, I would like to use C#, but it's not a hard requirement - though if the difference between two platforms is very small, it might tip the scale. From my understanding, PhoneGap is not suited for business apps; I am not entirely sure why, but it seems like a platform made for speed rather than long-term development - not sure how true this is. RhoMobile is made for enterprise and might suit my needs, but I'm not sure if it's a good platform. Mono is C#, seems to be very mature, and I found MvvmCross, which should help organize the project. Anyone with insight who cares to share their opinion?

    Read the article

  • Why does Javascript use JSON.stringify instead of JSON.serialize?

    - by Chase Florell
    I'm just wondering about "stringify" vs "serialize". To me they're the same thing (though I could be wrong), but in my past experience (mostly with asp.net) I use Serialize() and never Stringify(). I know I can create a simple alias in Javascript:

        // either
        JSON.serialize = function(input) {
            return JSON.stringify(input);
        };

        // or
        JSON.serialize = JSON.stringify;

    http://jsfiddle.net/HKKUb/

    But I'm just wondering about the difference between the two and why stringify was chosen. For comparison purposes, here's how you serialize XML to a string in C#:

        public static string SerializeObject<T>(this T toSerialize)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            StringWriter textWriter = new StringWriter();
            xmlSerializer.Serialize(textWriter, toSerialize);
            return textWriter.ToString();
        }

    Read the article

  • Where can I find programming work online?

    - by explorest
    I have set up an ideal, quiet, non-interrupting environment at home. I am extremely productive here. I don't want to leave my home - not my room, not even my couch. How/where do I find work online so that I don't have to travel to it? Kindly post about your own personal experiences. Have you done it full time from home? Where and how? I am outside the United States, in a third-world country, so lower pay is not an issue. The issue is the work environment.

    Read the article

  • Version control implementation advice on legacy websites?

    - by Eric
    Assume no experience with version control systems - just local-to-live web development. I've been dropped in on a few legacy website projects and want an easier and more robust way to quickly push and revert changes en masse. I'm currently the only developer on these projects, but more may be added in the future, and I think it would be beneficial to set up a system that others can use.

    Read the article

  • Linux distro for software development support?

    - by Xie Jilei
    I've spent too much time setting up and maintaining a development server, which contains the following tools:

    - common services like SSH, BIND, rsync, etc.;
    - Subversion, Git;
    - Apache, which runs CGit, Trac, Webmin, phpMyAdmin, phpPgAdmin, etc.;
    - Jetty, which runs Archiva and Hudson;
    - Bugzilla;
    - PostgreSQL server, MySQL server.

    I've created a lot of Debian packages, like my-trac-utils, my-bugzilla-utils, my-bind9-utils, my-mysql-utils, etc., to make my life more convenient. However, I still feel I need a lot more utils, and I've spent a lot of time maintaining these packages, too. I think there may be many developers doing the same things, as tools like Subversion, Git and Trac are so common today. It's not too hard to install and configure each of them, but it takes a long time to install them all, and it's time-consuming to maintain them: backing up the data, plotting usage graphs and generating web reports (gitstat, for example). So, I'd like to hear whether there exists any pre-configured distro for the development-server purpose, i.e., something like BackTrack for hackers?

    Read the article

  • What's the proper way to merge two projects in source control software

    - by Mallow
    I'm using Fossil-SCM to maintain my projects. Since I don't work in a team, I usually have just a very linear branch of development: 1.0 - 1.1 - 1.2. I'm wondering what the procedure is when one project's task is about to be given to a related project, thereby rendering the first project obsolete. Although I tend to rewrite most of my code if I don't remember having already written it, I would still like to keep the code archived, and I'd rather not have a fossil repo that is just dead. Can I merge it? Is that the proper way of handling this? For example, the code was extracting data from an Excel file in order to format an HTML page. Now I've convinced my employer to move their Excel spreadsheet into a database to decrease redundancy, increase efficiency and yadda yadda. Since I can now make logical queries against the database that don't have to jump through hoops to perform, I won't need the extra .vbs files that originally manipulated the Excel file. Technically I would be porting part of the existing code into the new project. Since it already has its own trunk, would it be advisable to combine the trunk of a different project with this one, and how would I do that exactly? I guess my tree would look like this, and I haven't seen examples of software branching that resemble this inverted tree before, so I'm wondering what the norm for a situation like this is.

    Read the article

  • Delivery terminology and order of magnitude

    - by Peter Turner
    What is the standard way of describing how software products are released, and the proportionate order of magnitude with which changes to the software product are conveyed? Is "Release / Update / Patch / Bug Fix" redundant? Or is "Update / Patch" too terse? As an end user I'd think that all bug fixes are patches (insofar as they are not 100% new code), all patches should be updates (insofar as they don't degrade the product), and all updates should be releases (insofar as they are actually released), but this really doesn't help anyone understand why they need to get them. Then again, if the person who makes the software change appends "critical" or "zero-day" to the notes, I would be unwise to leave the changes unapplied.

    Read the article

  • What effects do various drugs have on coding style / productivity? [closed]

    - by codecraft
    Can anyone tell me what the effects of various drugs are on coding style, and whether coding on drugs can be more productive, or more fun? Are some types of drugs better suited to certain tasks and phases of software development? And which programming languages are best suited to coding on drugs? It would be great if you could back up your answers with data, perhaps even code snippets showcasing the effect of the drug experience.

    Read the article

  • My first development job working at a company, what things to look out for?

    - by Kim Jong Woo
    So I've worked on my own all this time, selling software and creating a few web applications on my own. I have an Arts background and am self-taught. It was a bit difficult to find a development position, but after endless trying I finally landed a LAMP position. What I realized was that it was all a confidence issue: before, when I didn't know a few things, I panicked, but after spending such a long time working on my own projects and solving various problems, I felt confident enough that I could fulfill requirements on my own. I hope this helps other people applying for jobs. This is the first time I will be developing with other team members in an office - is there anything I should prepare for my first day at work next week? Any tips and pointers for working as a developer at a company? I'm kinda nervous but excited.

    Read the article

  • Would a professional, self taught programmer benefit from reading an algorithms book?

    - by user65483
    I'm a 100% self-taught, professional programmer (I've worked at a few web startups and made a few independent games). I've read quite a few of the "essential" books (Clean Code, The Pragmatic Programmer, Code Complete, SICP, K&R). I'm considering reading Introduction to Algorithms. I've asked a few colleagues if reading it will improve my programming skills, and I got very mixed answers: a few said yes, a few said no, and one said "only if you spend a lot of time implementing these algorithms" (I don't). So I figured I'd ask Stack Exchange: is it worth the time to read about algorithms if you're a professional programmer who seldom needs to use complex algorithms? For what it's worth, I have a strong mathematical background (I have a 2-year degree in Mathematics; I took Linear Algebra, Differential Equations, and Calc I-III).

    Read the article

  • What tasks should be explicitly mentioned in a job reference? [closed]

    - by Martin
    Glossary: a job reference (see also the German version) is a letter from the (former) employer that states what the employee did, and how well he did it. There are oh-so-weird rules here on how to phrase things therein, but that is not what this question is about.

    Question: I hope this can even be answered generally, but even if it's country/region specific, I think there is enough international know-how on this site to get useful answers for different regions. I was wondering how much detail about the tasks a programmer/developer did should be spelled out in a job reference. (After all, they can be spelled out in full detail in a CV when applying for a new job.) So how much detail is usual for a job reference?

    Example:

        Developed Windows applications in C++

    or

        Developed Windows Desktop Applications using C++ with MS Visual Studio 2005 and MFC, utilising Boost 1.47 and specific library xyz, focusing on subsystem abc for numerical calculations of ... etc.

    What makes more sense?

    Read the article

  • Is anyone doing "real" TDD with Visual-C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle:

    - Write test
    - Check test (run)
    - Write production code
    - Check test (run)
    - Clean up production code
    - Check test (run)

    As far as I'm concerned, this is only possible if your development solution allows you to very quickly switch between the production and test code, and to execute the test for a certain production code part extremely quickly. Now, while there exist lots of unit testing frameworks for C++ (I'm using Boost.Test atm), it does seem that there doesn't really exist any decent (for native C++) Visual Studio (plugin) solution that makes the TDD cycle bearable regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly. So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: no framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.)

    Edit: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# solution to enable unit tests and TDD, and for a good reason. This should be possible with C++ given the right tools, but it seems (this question is about that) there are very few tools for TDD/C++/VS. Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, afaik, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.).

    Edit: I would like to add a comment for context; I think it gives a good summary of (part of) the problem (comment by Billy ONeal): "Visual Studio does not use 'build scripts' that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing)."

    Read the article

  • How do you manage feature requests and software changes?

    - by 0A0D
    I am a Software Engineer and over the past few years I have become the de-facto software project manager simply because there isn't one. So to keep our sanity in the R&D/Engineering department, customers have become accustomed to coming to me with their requests. I have no experience in this realm so it is my first time acting as a project manager for software projects. I have managed other things but not software. So, how do you manage software projects and mark priorities? Requests come in at infrequent intervals so we very well could be working on something for someone else and then another person comes in with a "rush" job that needs working on. Is it easier to just say First Come, First Serve or is it the person with the most money?

    Read the article
