Daily Archives

Articles indexed Monday, November 11, 2013

Page 12/19 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Basic questions while making a toy calculator

    - by Jwan622
    I am making a calculator to better understand how to program, and I had a question about the following lines of code. I wanted to make my equals sign with this C# code: private void btnEquals_Click(object sender, EventArgs e) { if (plusButtonClicked == true) { total2 = total1 + Convert.ToDouble(txtDisplay.Text); //double.Parse(txtDisplay.Text); } else if (minusButtonClicked == true) { total2 = total1 - double.Parse(txtDisplay.Text); } } txtDisplay.Text = total2.ToString(); total1 = 0; However, my friend said this way of writing the code was superior, with changes in the minus branch: private void btnEquals_Click(object sender, EventArgs e) { if (plusButtonClicked == true) { total2 = total1 + Convert.ToDouble(txtDisplay.Text); //double.Parse(txtDisplay.Text); } else if (minusButtonClicked == true) { double d1; if(double.TryParse(txtDisplay.Text, out d1)) { total2 = total1 - d1; } } txtDisplay.Text = total2.ToString(); total1 = 0; My questions: 1) What does the "out d1" part of this minus-branch code mean? 2) My assumption here is that the "TryParse" code results in fewer crashes? If I just use "Double.Parse" and I don't put anything in the textbox, the program will sometimes crash, right?

    Read the article
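
    For the first question, here is a minimal, self-contained sketch of what the out parameter does (the TryParseDemo class and the input value are illustrative, not part of the original code): double.TryParse attempts the conversion, writes the result into the variable passed with out, and returns false instead of throwing when the text is not a valid number, which is why it avoids the crash that double.Parse or Convert.ToDouble would cause on an empty textbox.

        using System;

        class TryParseDemo
        {
            static void Main()
            {
                string input = "";   // e.g. what an empty textbox would hand you

                // double.Parse(input) would throw a FormatException here and,
                // unhandled, crash the program.

                // TryParse reports failure through its return value and writes
                // the parsed number into d1 via the out parameter.
                double d1;
                if (double.TryParse(input, out d1))
                {
                    Console.WriteLine("Parsed: " + d1);
                }
                else
                {
                    Console.WriteLine("Not a number; d1 is left at " + d1); // 0 on failure
                }
            }
        }

    So "out d1" means the method fills in d1 for you, and yes, the TryParse version simply skips the subtraction on bad input instead of crashing.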

  • Are "Compile to JavaScript" Frameworks Hostile to Continuous Integration?

    - by joshin4colours
    Lately we've been looking at ways to improve the automated testing and related tooling of our enterprise-level GWT web app. I've realized that in some ways GWT is a bit hostile to automated testing, mainly because of the long GWT compile times from Java to JS. This makes unit testing somewhat challenging, but it also puts up some roadblocks for testing in a CI environment. I've also found that some of our build and deployment processes are somewhat complicated due to the nature of GWT's compile process. Is this a general problem for "compile to JS" frameworks for webapps? I don't have much experience with them, but I can see some potential problems for automated testing and continuous integration and deployment. Some issues I see: long build and compile times preventing quick deployments; the language the app is developed in != JS, preventing good unit testing; obfuscated JS in the actual app making it more like an executable than a web app. Are these issues present in other similar frameworks, or is this more of a GWT issue?

    Read the article

  • What is the best age for a programmer to be hired? [on hold]

    - by Mohamed Ahmed
    I graduated from an information systems institute in 2004 and worked as an ICDL instructor, but I also know SQL Server and database design fairly well. I'm now 30, and I want to start studying computer programming and get the MCSA SQL Server and MCSE certificates, but I feel I'm too old to start and that companies won't accept me for that reason, and also because I don't have any experience in the field yet; I would be starting like a fresh graduate of 21 or 22. Please help me: what is the best age for a programmer to be hired, and will a late start be a big obstacle for me or not?

    Read the article

  • Fastest way to check if two square 2D arrays are rotationally and reflectively distinct

    - by kustrle
    The best idea I have so far is to rotate the first array by {0, 90, 180, 270} degrees and reflect it horizontally and/or vertically. We basically get 16 variations [1] of the first array and compare them with the second array; if none of them matches, the two arrays are rotationally and reflectively distinct. I am wondering if there is a more optimal solution than this brute-force approach? [1] 0deg, no reflection; 0deg, reflect over x; 0deg, reflect over y; 0deg, reflect over x and y; 90deg, no reflection; ...

    Read the article
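
    A sketch of the brute-force check (the GridSymmetry name and the use of int arrays are assumptions for illustration): the four rotations combined with an optional horizontal flip already generate all 8 distinct symmetries of a square, since reflecting over both x and y is the same as rotating 180 degrees, so 8 comparisons suffice rather than 16.

        using System.Linq;

        static class GridSymmetry
        {
            // True if b equals some rotation and/or reflection of a (both n x n).
            public static bool AreEquivalent(int[,] a, int[,] b)
            {
                var candidate = a;
                for (int rot = 0; rot < 4; rot++)
                {
                    if (Equal(candidate, b) || Equal(FlipHorizontal(candidate), b))
                        return true;
                    candidate = Rotate90(candidate);   // next rotation
                }
                return false;   // rotationally and reflectively distinct
            }

            static int[,] Rotate90(int[,] m)
            {
                int n = m.GetLength(0);
                var r = new int[n, n];
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        r[j, n - 1 - i] = m[i, j];     // clockwise quarter turn
                return r;
            }

            static int[,] FlipHorizontal(int[,] m)
            {
                int n = m.GetLength(0);
                var r = new int[n, n];
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        r[i, n - 1 - j] = m[i, j];     // mirror the columns
                return r;
            }

            static bool Equal(int[,] x, int[,] y)
            {
                return x.Cast<int>().SequenceEqual(y.Cast<int>());
            }
        }

    To go faster than brute force, compare cheap invariants first (the multiset of elements, row and column sums) or compare a canonical form, e.g. the lexicographically smallest of the 8 variants, which also lets arrays be bucketed by a single hash.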

  • Will learning wxPython be worth it in the future? [on hold]

    - by user108437
    As we know, Microsoft has been pushing Windows 8.1, which relies heavily on XAML to design apps, while for desktop mode WPF is another framework (which some think has failed). In the old days, developers wrote Windows forms software using MFC or the like, where they had to run their own main loop, etc. I have recently come to love Python, and learning Python is certainly worth it, since there is still IronPython out there, which uses .NET. But I am not sure whether it is also worth learning wxPython for building Windows software that does not require .NET. I also notice that wxPython is somewhat old and still uses Python 2.7, while Python is already at version 3.3; besides that, the books on it are old, published in 2007, and there no longer seems to be much hype about building Windows forms without .NET, because .NET comes mostly preinstalled on new Windows versions. So my humble question is: should I learn Python + wxPython, or only Python? Is there any benefit I might not have noticed in being able to write Windows applications that do not use .NET?

    Read the article

  • Routing Internet traffic over specific network interfaces [on hold]

    - by dipamchang
    I want to route my Internet traffic over all my available connections (like LAN and a 3G data card) based on conditions; for example, if a website is blocked over the LAN, that traffic should go through the data card (or another available Internet connection). My ultimate motive is to integrate this feature into my web browser, which I have already built using C# and the .NET Framework. I have found that one can add a route by using the following cmd command: route add DestinationIP mask subnet InterfaceGatewayIP, but I am stuck as to how it should be implemented using C#.

    Read the article
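
    One way to do this from C#, sketched under the assumption that shelling out to the built-in route command is acceptable and that the process can be elevated; the RouteHelper class and its parameters are illustrative, not an existing API. A more robust alternative is the IP Helper API (CreateIpForwardEntry in iphlpapi.dll) via P/Invoke.

        using System.Diagnostics;

        static class RouteHelper
        {
            // Adds a static route by invoking: route add <dest> mask <mask> <gateway> metric <metric>
            public static void AddRoute(string destination, string mask, string gateway, int metric)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "route",
                    Arguments = string.Format("add {0} mask {1} {2} metric {3}",
                                              destination, mask, gateway, metric),
                    Verb = "runas",           // route add requires administrator rights
                    UseShellExecute = true    // needed for the "runas" verb
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    // a non-zero ExitCode means the route was not added
                }
            }
        }

    Steering only the blocked site's traffic is then a matter of adding a host route for that site's IP that points at the data card's gateway, while the LAN stays the default route.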

  • Choosing a CMS to use with backend modules involving Haskell and Python [on hold]

    - by Butterflycode
    Hi, I am trying to decide on a CMS to use for a new project. Security is the most important element of the CMS. I am looking to use a PHP-based CMS such as Joomla or Drupal; however, PHP has many security flaws, which worries me. The data which needs to be secure will be inside a database and relates to account information. I am wondering what the best way to do this is. What I want is a frontend made in PHP/JS (Joomla), and then a backend API written in Haskell to handle money transfers, ensuring nothing goes wrong. In between the two I want a controller written in perhaps Python or C. I never want the PHP to touch the database; I want it to relay messages to the controller written in Python or C, which then writes to the database, sanitising data, etc. Am I perhaps thinking too deeply about this? Just wondering if anyone has any ideas on what I should do. I can't quite explain what the project is, as I don't want the idea to be stolen, but it involves a lot of money transactions, so security is essential.

    Read the article

  • How to ensure that a member variable is initialized before calling a class method

    - by Omkar Ekbote
    There's a class with a parametrized constructor that initializes a member variable. All public methods of the class then use this member variable to do something. I want to ensure that the caller always creates an object using the parametrized constructor (there is also a setter for this member variable) and then calls that object's methods. In essence, it should be impossible for the caller to call any method without setting a value for the member variable (either by using the parametrized constructor or the setter). Currently, a caller can simply make an object using the default constructor and then call that object's methods. I want to avoid checking whether or not the member variable is set in each and every one of the 20-odd methods of the class (and throwing an exception if it is not). Though a runtime solution is acceptable (better than the one I mentioned above), a compile-time solution is preferable, so that no developer is even allowed to make that mistake and then waste hours debugging it!

    Read the article
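
    One compile-time option, sketched below under the assumption that the setter can be dropped in favour of a readonly field (the ReportGenerator class and connectionString member are hypothetical stand-ins): declaring any constructor suppresses the implicit default constructor, so the compiler itself rejects new ReportGenerator() and callers are forced to supply the value up front.

        using System;

        public class ReportGenerator
        {
            private readonly string connectionString;

            // The only way to construct the object; no parameterless constructor
            // exists, so "new ReportGenerator()" is a compile error.
            public ReportGenerator(string connectionString)
            {
                if (string.IsNullOrEmpty(connectionString))
                    throw new ArgumentException("A connection string is required.", "connectionString");
                this.connectionString = connectionString;
            }

            public void Generate()
            {
                // Safe to use connectionString here without re-checking it in
                // every method: construction already guaranteed it is set.
            }
        }

    If the setter must stay, the next best thing is a single private EnsureInitialized() guard called at the top of each public method, which at least centralizes the check instead of duplicating it across 20-odd methods.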

  • Good fix vs Quick fix [duplicate]

    - by Andrea Girardi
    This question already has an answer here: Does craftsmanship pay off? [duplicate] 16 answers; Good design: How much hackyness is acceptable? [duplicate] 9 answers; How do you balance between "do it right" and "do it ASAP" in your daily work? 14 answers. Let's start from this principle: quality is a feature that you can't add to a project in the middle of the development process. This is the scenario: two weeks to go live with my project, and one of the developers added to our framework a specific method used by only one web application (our framework is a bunch of Java classes used to extract content from MongoDB, Alfresco and MySQL, and it's used by several web applications). I'm the team leader and I told him to generalize the method to keep the framework reusable, but he said "no, I prefer not to do that because there are a lot of bugs that need to be fixed". The manager agrees with him, and of course I don't. Is it better to make the extra effort to keep a framework free from any specific implementation (probably used by only one web application), or to just add the method because it works? So, my question is: is it correct to write code that merely works, or is it better to write code that works and doesn't suck (i.e. without adding embedded values, one-off methods, extra classes, extra database columns, etc.)? How is it possible to justify the extra time (to be honest, this kind of fix requires only ten extra minutes to write good generic code) to the management? How is it possible to argue to young developers and PMs that this is the right way to write code? In general, good fix or quick fix? Ah, and ten minutes after I got the email from the PM, he asked me why, during login, the name of application 1 appeared in a URL of application 2. I like to quote Jeff Atwood: "Don't leave "broken windows" (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered." Excerpt From: Hyperink, "How-To-Stop-Sucking-And-Be-Awesome-Instead."

    Read the article

  • Batch: comparing filenames and renaming [migrated]

    - by user2978770
    I'm new to both this platform and batch programming, and I'm slowly but steadily going crazy :-( I'm studying in Germany and just started on a bigger project that mainly consists of analyzing data and finding algorithms in order to maintain a certain function of a system. To get started I was given a bunch of recorded data that, unfortunately, is not consistent when it comes to naming. Normally all files (all in one folder) should start with SPY.SPYNODE.SIDE and then go on with the specific names for each value or variable. However, the data logger messed it up a couple of times and gives weird names like SP0E1A~1.csv (all files are .csv files). And that's when I figured that instead of renaming a couple of thousand files manually, I could "easily" use a simple batch file to do that job for me. And that's exactly when I started to go crazy :-) So far I came up with the following: FOR /R %%i in (%CD%) DO ( set file1=%%i if not %file1%=="SPY.SPYNODE.SIDE" DO ( set /p "filename" < %file1% rename %file1% %filename% ) ) So what I want it to do is this (in pseudo): look through the whole folder and every file; save the filename in variable file1; if file1 partially equals SPY.SPYNODE.SIDE, open the file and save the first line (which contains the correct name of the file) in variable filename; rename the file with the correct filename. But so far it doesn't really work and I don't know why. Could anybody give me a hint or some advice on how I should proceed? I really appreciate any kind of help!

    Read the article
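
    The pseudocode in the question is easier to see spelled out in a small compiled helper; the following is a sketch in C# rather than batch, purely to illustrate the steps (the folder path is a placeholder, and the assumption is that the first line of each misnamed .csv holds the correct name).

        using System;
        using System.IO;

        class RenameLogs
        {
            static void Main()
            {
                string folder = @"C:\data\logs";   // hypothetical folder

                foreach (string path in Directory.GetFiles(folder, "*.csv"))
                {
                    string name = Path.GetFileName(path);
                    if (name.StartsWith("SPY.SPYNODE.SIDE"))
                        continue;                                  // already named correctly

                    string correctName;
                    using (var reader = new StreamReader(path))
                    {
                        correctName = reader.ReadLine();           // first line = correct name
                    }
                    if (string.IsNullOrWhiteSpace(correctName))
                        continue;                                  // nothing usable, skip

                    string target = Path.Combine(folder, correctName.Trim() + ".csv");
                    Console.WriteLine(name + " -> " + Path.GetFileName(target));
                    File.Move(path, target);
                }
            }
        }

    The same ideas apply if the batch route is kept: iterate only over files (FOR %%i IN (*.csv)), enable delayed expansion so variables set inside the loop are read correctly, and read the first line with set /p filename=<"%%i".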

  • Strategies for Indexing Custom Fields in RavenDB

    - by Adrian Thompson Phillips
    In the relational database world, if I was developing a CRM system and wanted to let the user add their own custom fields that are searchable, I could have tables that store the name of the new column, the data type, the value, etc. (which would be less efficient to index), or I could use the less elegant (but more searchable) solution that software like Dynamics and SharePoint uses, where I create a load of columns on my aggregate root called CustomInt1, CustomInt2, etc. (which looks dirty and limits how many custom fields a user can have, but has indexing advantages). But my question is this: in NoSQL databases, what would be the best way of achieving the same thing? My priority would be searchability. So what would be the best way to store this data? If I used a predefined set of properties (i.e. CustomData1, CustomData2, etc.), because these are all stored as JSON (i.e. strings) in the database, does that make it simpler because I don't have to worry about data types?

    Read the article
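
    On the document-database side, one common shape (a sketch of the modelling only, not of any specific RavenDB indexing feature; the Contact class is hypothetical) is to hang the custom fields off the document itself as a dictionary, so each document carries exactly the fields its user defined, with no CustomInt1-style padding and no separate name/type/value tables.

        using System.Collections.Generic;

        public class Contact
        {
            public string Id { get; set; }
            public string Name { get; set; }
            public string Email { get; set; }

            // Key = the user-defined field name, value = the raw value.
            // The dictionary is serialized inline with the rest of the document.
            public Dictionary<string, string> CustomFields { get; set; }

            public Contact()
            {
                CustomFields = new Dictionary<string, string>();
            }
        }

    Keeping every value as a string does make the model simpler, but only equality searches stay easy; numeric or date range queries need the values stored in their native types (or mirrored into typed fields), so the data-type question does not entirely go away.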

  • MVC Design Pattern to Combine Multiple Models for use

    - by roverred
    In my design, I have multiple models and each model has a controller. I need to use all the models to process some operation. Most examples I see are pretty simple, with one view, one controller and one model. How would you get all these models working together? The only ways I can think of are: 1) Have a top-level controller which has a reference to every other controller; those controllers would each have a getter/setter for their model. Does this violate MVC, given that every controller should have a model? 2) Have an intermediate class that combines every model into one model, then create a controller for that new super-model. Do you know of any better ideas? Thanks.

    Read the article
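
    A sketch of option 2, the composite (facade) model, with hypothetical CustomerModel/OrderModel/InventoryModel classes: the single controller talks to one aggregate object, and that object coordinates the underlying models, so no controller ever has to reach into another controller.

        // Hypothetical individual models, each owning its own data access.
        public class CustomerModel  { /* customer lookup, persistence */ }
        public class OrderModel     { /* order records, persistence */ }
        public class InventoryModel { /* stock levels */ }

        // The facade the controller works against: it owns the three models
        // and exposes the cross-model operation as one method.
        public class CheckoutModel
        {
            private readonly CustomerModel customers;
            private readonly OrderModel orders;
            private readonly InventoryModel inventory;

            public CheckoutModel(CustomerModel customers, OrderModel orders, InventoryModel inventory)
            {
                this.customers = customers;
                this.orders = orders;
                this.inventory = inventory;
            }

            public void PlaceOrder(string customerId, string productId, int quantity)
            {
                // 1. validate the customer, 2. reserve stock, 3. record the order:
                // each step delegates to the model that owns that data.
            }
        }

    This keeps the one-controller-one-model shape intact (the controller's model is CheckoutModel), which is usually read as consistent with MVC rather than a violation of it.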

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing the development time much or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these requirements, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option but, again in my opinion, it greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear about anything else that satisfies the above requirements: maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C, I just wish templates were available in it.

    Read the article

  • In-memory datastore in Haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hashtable? Do I gain anything with this STM hash table, or should I just use an STM ref to an IntMap?

    Read the article

  • Search and Replace in MVC

    - by danip
    What would be a good MVC/OOP/GRASP/SOLID structure for search/replace functionality? Methods: search/searchNext/replace/replaceAll. I'm interested only in the PHP architecture and how a professional developer would implement this in their OWN FRAMEWORK. What names would you use for the classes? What subfolders would you use in your MODEL folder? How would you connect the MODELS/CONTROLLER? This is just an architecture question, to better understand the principles of good OOP in practice. My current implementation is very simplistic, using a service model: /controller/SearchReplaceController.php /models/services/SearchReplaceService.php The problem with this is that I know I'm breaking SRP in the service, but I found it somehow acceptable. Also, creating a service does not feel like the best solution for this.

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow by about 5000 rows an hour. For a key that I would be querying by, the results will grow by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try to implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that works until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cached result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity for invalidation. Given the fact that I have about 5000 updates before a query on a key would need to be invalidated, it seems that the database query cache would not help. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not a single-table TOP N query; one version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just let a website query go through all the layers and hit the database on every request?

    Read the article
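
    One way to keep most cached pages stable, sketched here with a plain dictionary standing in for memcache (the class, the key format and FetchPageFromDb are all assumptions): number pages from the oldest row instead of the newest, so an insert only ever affects the last, partial page, older pages never need invalidating, and the "latest 50" view is assembled from the newest page plus, if needed, the one before it.

        using System;
        using System.Collections.Generic;

        public class LatestRowsCache
        {
            private const int PageSize = 50;
            private readonly Dictionary<string, List<string>> cache =
                new Dictionary<string, List<string>>();

            // Rows inside each page are assumed to come back newest-first.
            public List<string> GetLatest(string key, long totalRowsForKey)
            {
                long newestPage = totalRowsForKey / PageSize;        // partial, keeps changing
                var rows = new List<string>(GetPage(key, newestPage, cacheable: false));
                if (newestPage > 0 && rows.Count < PageSize)
                    rows.AddRange(GetPage(key, newestPage - 1, cacheable: true));

                return rows.GetRange(0, Math.Min(PageSize, rows.Count));
            }

            private List<string> GetPage(string key, long page, bool cacheable)
            {
                string cacheKey = key + ":" + page;                  // never shifts as rows arrive
                List<string> rows;
                if (cacheable && cache.TryGetValue(cacheKey, out rows))
                    return rows;

                rows = FetchPageFromDb(key, page);                   // hypothetical query
                if (cacheable)
                    cache[cacheKey] = rows;
                return rows;
            }

            private List<string> FetchPageFromDb(string key, long page)
            {
                // SELECT the 50 rows of this fixed page for the key, newest first.
                return new List<string>();
            }
        }

    Because page numbers count up from the oldest row, the insert path never needs to invalidate anything; at most it can pre-warm the cache entry for a page it has just completed.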

  • Dual boot (Win 7 & Ubuntu 13.10) clock problem

    - by peter
    I'm a "newbie" to Ubuntu, but I've been wrestling with this problem for several hours and don't seem to be able to solve it. When I set the time in Windows (Indianapolis, Eastern U.S. time zone) and then reboot into Ubuntu, the computer's time goes to Hawaiian time. When the time is set in Ubuntu and the computer is rebooted into Windows, the time is advanced by 5 hours. I've set the time in the BIOS, and it seems to make no difference. I've tried setting the time from "automatic" to "manual", all with the same result. Not a big problem, but it shows some underlying glitch. Could anyone explain?

    Read the article

  • Swap on Ubuntu: No primary partition

    - by 3l4ng
    I am running Ubuntu 13.10 64-bit on a system with 4GB of RAM, dual-booting with Windows. Most people say that it is good to have swap on a system and that it helps performance, so I used it with my previous Ubuntu installations. On my new HDD I use 3 primary partitions: one for the Windows OS, one for Ubuntu and one for data. Windows also took up one primary partition for its system partition, and I have only 4 MBR slots, so effectively I have no primary partition left for swap. I don't know how it worked out earlier, but back then I had a partition for swap as well. My current disk partitioning looks like this: http://imgur.com/YMTr879 How can I create swap in my current setup?

    Read the article

  • Corsair Hydro i series cpu cooler fan control

    - by user214690
    I'm relatively new to Ubuntu and have found an answer to basically every single issue I've ever had through this site... other than this one. I have been toying with the idea of a Corsair H80i for my dual-boot system (Win7/Ubuntu 12.10), which I mostly use in Ubuntu. I have done some research on the web regarding fan control in Linux and nearly came up short until I came across this thread: http://ubuntuforums.org/archive/index.php/t-2096166.html and it seems to have worked around the problem (although I have not tested it myself). Is there any program/library/source that can be used to control the fans without having to MacGyver it?

    Read the article

  • Why do things change between using a LiveCD/LiveUSB and installing Ubuntu?

    - by ahow628
    Here are a couple of weird experiences I've had with an Ubuntu LiveCD or LiveUSB: 1) I had one of the original Chromebooks (CR-48). I ended up wiping ChromeOS and installing only Ubuntu 12.04.0 just after it came out. It worked like a charm. About a year later, I broke something and reinstalled Ubuntu using 12.04.3 on a LiveUSB. The LiveUSB worked perfectly: screen resolution, wifi and trackpad all worked fine. I installed it (once installing updates, once stock from the USB drive) and both times the screen resolution, wifi and trackpad all broke. I ended up downloading 12.04.0, installing it and then upgrading to 12.04.3 after the fact, and everything worked perfectly once again. 2) I purchased a Toshiba Portege z935 and the LiveUSB worked perfectly, namely the wifi. After install, the wifi was extremely slow and basically couldn't load any pages. The answer was that Bluetooth somehow conflicted with wifi, and Bluetooth had to be disabled to get wifi to work. Yet both could be enabled in the LiveUSB version, no problem. So my question is, why does this happen? Why does everything work perfectly from the LiveUSB version but then get broken when installed on the system? Is there a different way to install Ubuntu that would carry things over exactly as they were in the LiveUSB version (drivers, settings, etc.)? Are there assumptions that the installer makes that I could override somehow?

    Read the article

  • xbindkeys slow on Ubuntu 13.10

    - by 3l4ng
    I am using Ubuntu 13.10 64-bit on an Intel i5 with 4GB of RAM. I used xbindkeys for custom keyboard shortcuts in Ubuntu 13.04 because it was easy to configure with the GUI xbindkeys-config. Now I have set up the same on Ubuntu 13.10, and even a simple operation like opening a file using gedit seems to run slowly. Reinstalling xbindkeys does not seem to solve the problem. Does anyone have any ideas on what could be done, or any alternatives that are easy to configure?

    Read the article

  • My gnome-terminal keeps opening new windows

    - by evan
    I actually wanted to change the default window position of gnome-terminal on my Ubuntu 12.04 system. After some searching, I found that someone else used the command gnome-terminal --geometry=120x80+50+50 to set the default position. I didn't know where to put the command, so I pasted it into the 'custom command' field of the terminal's profile. Now when I open one terminal it just keeps opening new ones, and I have no way to stop it other than Ctrl+C. I even removed the .gconf/gnome-terminal/ folder and it didn't help. Can someone help me?

    Read the article

  • Update to kernel 3.12 seems to fail: uname reports old rc7

    - by carlo
    I currently run Xubuntu 13.10 with kernel 3.12 rc7. Today I tried updating to the latest 3.12 kernel (non-rc), but this seems to fail. When installing the image and headers I see the following error passing by: ... run-parts: executing /etc/kernel/postinst.d/dkms 3.12.0-031200-generic /boot/vmlinuz-3.12.0-031200-generic Error! The dkms.conf for this module includes a BUILD_EXCLUSIVE directive which does not match this kernel/arch. This indicates that it should not be built. ... After rebooting, when I do uname -r or cat /proc/version, it tells me that I'm still running on the old rc7 kernel. Since my microphone wasn't working on my Sony Vaio Pro 13, I downloaded and installed the latest ALSA drivers using the oem-audio-hda-daily-dkms package, which seemed to fix the problem (with the mic). Maybe this has something to do with it? I also tried removing the package using sudo apt-get purge oem-audio-hda-daily-dkms, but no success.

    Read the article

  • Problem with Nvidia after update

    - by user214673
    After adding the most recent set of updates, I can't get into Ubuntu at all. I get a screen saying that I am running in low-graphics mode: 'your screen, graphics card and input device settings could not be detected correctly', etc. I can hit Return and then get to the next screen asking 'what would you like to do?'; however, I can't choose from the 4 options, and the only keystroke that registers is Escape, which takes me to a black screen with a login prompt. I have an Nvidia GeForce 7050/nForce 610i, and it has caused problems in the past, but I have always got around it by choosing to boot in recovery mode. Now, no matter which version I try to boot into, I can't get into Ubuntu at all.

    Read the article

  • Can't boot - "Waiting for Network Configuration"

    - by user213017
    After an update on 13.10, my PC won't boot Ubuntu any longer. It displays the infamous "Waiting for Network Configuration" message and then hangs. I can go into recovery mode, and choose "Start networking" and then go to a root prompt, and that works fine. Ping works. /etc/network/interfaces contains just the two lines "auto lo" and "iface lo inet loopback". I've double-checked that my network is working, the cable is working (it works on another PC) and the network card seems to indicate a connection. Any suggestions on how to get my PC booted again? Right now I'm limited to a root shell prompt.

    Read the article
