Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: The "all-PKs-must-be-surrogates" approach is not present in Codd's relational model or in any SQL standard (ANSI, ISO or other). The canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when:
    - there are no natural or business keys
    - natural or business keys are bad (they change often)
    - the value of the natural or business key is not known at the time of inserting the record
    - multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose
    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations by some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of the maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changes, so we will have to use the RDBMS's cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query, and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for a PK. But indexes based on short, fixed-length varchars are almost as fast as integers. The questions:
    - Is there any canonical source that supports the "all-PKs-must-be-surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • script engine with no global environment (java)

    - by user1886930
    I am curious about how global variables are handled by script engines. I am looking for a script engine that does not preserve the state of global variables between invocations. Are there such engines out there? We are looking for a scripting language we can use under the Script Engine API for Java (javax.script). When making multiple invocations of a script engine, top-level calls to the eval() or evaluate() method preserve the state of global variables, meaning that subsequent calls to eval() will see the global variables as they were left by the previous invocation. Is there a script engine that does not preserve this state, or that provides the ability to reset it, so that global variables are at their initial state every time the script engine is invoked? One possible workaround is sketched below.
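
    With the standard javax.script API, one workaround is to evaluate each script against a throwaway ScriptContext, so that no ENGINE_SCOPE bindings survive between calls. A minimal sketch, assuming a JRE that bundles a JavaScript engine (Rhino or Nashorn); the inline script is only an illustration:

        import javax.script.*;

        public class FreshGlobalsDemo {
            public static void main(String[] args) throws ScriptException {
                ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
                String script = "var x = (typeof x === 'undefined') ? 1 : x + 1; x;";
                System.out.println(evalIsolated(engine, script)); // 1
                System.out.println(evalIsolated(engine, script)); // 1 again: x did not survive
            }

            // Evaluate with a fresh context so globals never persist between calls.
            static Object evalIsolated(ScriptEngine engine, String script) throws ScriptException {
                ScriptContext context = new SimpleScriptContext();
                context.setBindings(engine.createBindings(), ScriptContext.ENGINE_SCOPE);
                return engine.eval(script, context);
            }
        }

    This resets state around the engine rather than finding one that is stateless by design, but it gives the per-invocation behaviour the question asks for.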

    Read the article

  • Handle complexity in large software projects

    - by Oliver Vogel
    I am the lead developer on a large software project. From time to time it gets hard to handle the complexity within this project, e.g.:
    - keeping the whole big picture in mind all the time
    - keeping track of teammates' work results
    - doing code reviews
    - supplying management with information
    - etc.
    Are there best practices or time-management techniques for handling these tasks? Are there any tools that help you keep an overview?

    Read the article

  • What training book should I choose after Microsoft's Application Development Foundation (70-536)?

    - by codys-hole
    I've just finished the 70-536 ("Microsoft .NET Framework - Application Development Foundation") training book from Microsoft Press, and I found it quite good. I have also done the 70-528 ("Microsoft .NET Framework 2.0 - Web-based Client Development") book. What book should I be reading next? I am job hunting, so I want to be marketable for a position as a software developer. What will make me stand out from the crowd and get the job?

    Read the article

  • Does anyone do hardware benchmarks on compiling code?

    - by Colen
    I've seen a bunch of sites that benchmark new hardware on gaming performance, zipping some files, encoding a movie, and so on. Are there any that test the impact of new hardware (like SSDs, new CPUs, RAM speeds, or whatever) on compile and link speeds, on either Linux or Windows? It'd be really good to find out what matters most for compile speed and be able to focus on that, instead of just extrapolating from other benchmarks.

    Read the article

  • How to find out what Hackathons are coming up near me? [closed]

    - by codeulike
    I just went to my first hackathon (London GreenHackathon) and I want to go to lots more, specifically hackathons that have a theme of helping people or making the world a better place in some way. I know about Random Hacks of Kindness and a couple of others, but is there some central aggregator that will help me find out about upcoming hackathons? Also, if anyone can suggest specific hackathon initiatives I should check out, that would be handy too. I'm in London, but for other people reading this question other locations will obviously be of interest.

    Read the article

  • Strategy for versioning on a public repo

    - by biril
    Suppose I'm developing a (JavaScript) library which is hosted on a public repo (e.g. GitHub). My aim, in terms of how version numbers are assigned and incremented, is to follow the guidelines of semantic versioning. Now, there's a number of files in my project which compose the actual lib, and a number of files that support it, the latter being docs, a test suite, etc. My perspective thus far has been that version numbers should only apply to the actual lib, not the project as a whole, since the lib alone is the unit that defines the public API. However, I'm not satisfied with this approach, as, for example, a fix in the test suite constitutes an improvement to my project which will not be reflected in the version number (or in the docs, which contain a reference to it). On a more practical level, various tools, such as package managers, may (understandably) not play along with this strategy. For example, when trying to publish a change which is not reflected in the version number, npm publish fails with the suggestion "Bump the 'version' field, set the --force flag, or npm unpublish". Am I doing it wrong?

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks:
    - Job (pass information from one server to another)
      - Fetch task (get the data, slowly)
      - Send task (send the data, comparatively quickly)
    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or to process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:
    Split:
    - Job lease length reflects the task length, rather than the total of the two
    - Finer granularity on recovery: if we lose outgoing connectivity, we can tell all the Send tasks to retry
    - The starting state of the second task is saved to job history, which helps with debugging (although similar logging could be added in the single-task method)
    Single:
    - Only a single job to be scheduled: less processing overhead
    - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs (in the split method) could be outdated
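
    To make the trade-off concrete, here is a minimal Java sketch of the two shapes against a hypothetical queue client (JobQueue, enqueue(), and the job-type names are all illustrative, not a real library):

        // Hypothetical queue client: enqueue() schedules a job that a worker
        // will lease and run. None of these names come from a real library.
        interface JobQueue {
            void enqueue(String jobType, byte[] payload);
        }

        class TransferScheduling {
            // Single: one job covers both tasks, so one lease spans the slow
            // fetch and the quick send, but nothing stale ever sits in the queue.
            static void scheduleAsOneJob(JobQueue q, byte[] request) {
                q.enqueue("fetch-and-send", request);
            }

            // Split: the fetch worker ends by enqueuing a send job carrying the
            // fetched data. Each lease matches one task and a failed send can
            // retry alone, but the payload may be stale by the time it is sent.
            static void scheduleAsSplitJobs(JobQueue q, byte[] request) {
                q.enqueue("fetch", request);
                // ...the "fetch" worker later calls q.enqueue("send", fetchedData)
            }
        }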

    Read the article

  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently. The team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch-type commits when I have time. Now, I can see that in some environments this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and that works perfectly well), potentially introducing new bugs, etc. I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code in the first place if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, and I either mimic it or refactor it. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, I risk introducing bugs, etc.). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed. Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handle the environment, etc.

    Read the article

  • What companies do what I'm interested in?

    - by Alex
    I'm a systems guy. People change their concentrations to avoid taking operating systems, while I took it during my first semester after transferring. I'm taking compilers and networks now, and I think they're awesome. And yet there are so many job postings looking for people to do work in things like web development, and so few postings looking for people to work in kernel hacking or network engineering. What sorts of companies do these things? I'm currently awaiting a contract in the mail for an internship with VMware, so I'm not out of a job for the summer. Still, I'd like to know which companies do these things.

    Read the article

  • Should I use multiple-column primary keys or add a new column?

    - by Covar
    My current database design makes use of a multiple-column primary key in order to use existing data (which would be unique anyway) instead of creating an additional column that assigns each entry an arbitrary key. I know that this is allowed, but I was wondering if this is a practice that I might want to use cautiously and possibly avoid (much like goto in C). So what are some of the disadvantages I might see in this approach, or reasons I might want a single-column key?
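
    One practical cost shows up as soon as an ORM is involved: JPA, for instance, requires a separate Serializable id class for a composite key. A minimal sketch (the entity and column names are made up for illustration):

        import java.io.Serializable;
        import java.util.Objects;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.IdClass;

        // JPA's price for a composite key: a separate Serializable id class whose
        // fields mirror the @Id fields, plus equals/hashCode. Names are made up.
        class OrderLineId implements Serializable {
            String orderNo;
            int lineNo;

            @Override public boolean equals(Object o) {
                if (!(o instanceof OrderLineId)) return false;
                OrderLineId other = (OrderLineId) o;
                return Objects.equals(orderNo, other.orderNo) && lineNo == other.lineNo;
            }

            @Override public int hashCode() { return Objects.hash(orderNo, lineNo); }
        }

        @Entity
        @IdClass(OrderLineId.class)
        class OrderLine {
            @Id String orderNo; // natural key, part 1
            @Id int lineNo;     // natural key, part 2
            String product;
        }

    A single surrogate column would need only one @Id field, which is much of the reason ORM-centric designs drift toward surrogates.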

    Read the article

  • What is the ideal laptop for creative coding applications?

    - by Jason
    Hi, I am a creative coder using C++ (Cinder and openFrameworks), and I am looking to upgrade from my MacBook, which slowed down to about 3 fps this morning. My project involves particle systems and fluids reacting to audio-analysis data and computer-vision data in real time. SD or HD? No biggie. I have asked many people what computer I need. Ideally, I want a MacBook Pro, but is that enough power?
    - I've been told that I need a desktop for what I am doing, though I'd rather stay portable
    - I've been told that I should go PC/Linux to get the most power, but I'd rather stay on a Mac
    - I've been told that RAM is more of a bottleneck than processor speed
    - I've been told that the graphics card is more important than the CPU, and that code optimizations such as using trees over lists, proper threading, and sending tasks to the GPU make a bigger difference than the hardware!!!
    What's true?! What do I need? Any suggestions are greatly appreciated.

    Read the article

  • What is the general definition of something that can be included or excluded?

    - by gutch
    When an application presents a user with a list of items, it's pretty common for it to permit the user to filter the items. Often a 'filter' feature is implemented as a set of include or exclude rules, for example: include all emails from [email protected], and exclude those emails without attachments. I've seen this include/exclude pattern often; for example, Maven and Google Analytics filter things this way. But now that I'm implementing something like this myself, I don't know what to call something that could be either included or excluded. In specific terms:
    - If I have a database table of filter rules, each of which either includes or excludes matching items, what is an appropriate name for the field that stores include or exclude?
    - When displaying a list of filters to a user, what is a good way to label the include or exclude value?
    (As a bonus, can anyone recommend a good implementation of this kind of filtering for inspiration?)
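
    One common choice (a convention, not an established standard) is to call that field the rule's "action" or "mode", with the two values spelled as an enum. A minimal Java sketch, all names hypothetical:

        import java.util.List;
        import java.util.function.Predicate;

        // "Action" is one common name for the include-or-exclude field; "mode"
        // and "polarity" also appear in the wild. Everything here is illustrative.
        enum FilterAction { INCLUDE, EXCLUDE }

        record FilterRule<T>(FilterAction action, Predicate<T> matches) {}

        class Filters {
            // Convention used here: keep an item if it matches at least one
            // INCLUDE rule and no EXCLUDE rule. (Treat "no include rules" as
            // include-all if that fits your domain better.)
            static <T> boolean accepts(List<FilterRule<T>> rules, T item) {
                boolean included = rules.stream()
                        .filter(r -> r.action() == FilterAction.INCLUDE)
                        .anyMatch(r -> r.matches().test(item));
                boolean excluded = rules.stream()
                        .filter(r -> r.action() == FilterAction.EXCLUDE)
                        .anyMatch(r -> r.matches().test(item));
                return included && !excluded;
            }
        }

    In a UI, the value usually reads best as a leading verb on each rule: "Include emails from ...", "Exclude emails without attachments".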

    Read the article

  • How to better start learning programming - with imperative or declarative languages?

    - by user712092
    Someone is interested in learning to program. Which language paradigm should I recommend to him: imperative or declarative? And which programming language should he start with? I am thinking declarative, because it is closer to math, and I would say that Prolog might be the best start because it is based on logic and programs are short. On the other hand, at school we started by learning imperative languages, and I am not sure whether there is a benefit to starting with them instead of declarative ones. Thanks. :)

    Read the article

  • Is there a non-Express, non-student, non-MSDN version of Visual Studio 2010 that supports plugins and is cheaper in the US than the $710 Professional Edition?

    - by Justin Dearing
    I've never actually purchased a copy of Visual Studio myself. SharpDevelop and the Express editions have always been good enough for my personal use, and my employers always furnished me with the IDEs I needed to serve them. However, I'm thinking of actually paying for a copy for my personal laptop. I need this mainly so I can open solutions that contain web projects. So my question is: is there an edition cheaper than the $710 Pro edition on Amazon that will do what I need: http://www.amazon.com/Microsoft-C5E-00521-Visual-Studio-Professional/dp/B0038KTO8S/ref=sr_1_2?ie=UTF8&qid=1287456230&sr=8-2 ? What I need is defined as:
    - Open up a solution with C#, web app, VB.NET, and web projects.
    - Install add-ins like ReSharper, TestDriven.NET, SCM plugins, etc.
    - Some level of DB project support; at least being able to open a dbproj. I only need that for SCM hooks. SSMS and SQLCMD are good enough for actually editing databases.
    - The ability to install F#, IronPython, IronRuby, etc.
    Now, naturally, I'm a fairly intelligent, resourceful person, so I realize I could get Visual Studio in a questionable manner. That's not what I'm looking to do. I want a legal copy; I don't want a student copy or an MSDN copy. I want a real copy, and I just want to make sure I get the cheapest edition that serves my needs.

    Read the article

  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
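
    The question is framed around .NET, but the same comparison is easy to run with Java's NIO.2. A rough micro-benchmark sketch (the file path is an assumed test fixture, and the timings are indicative only):

        import java.nio.ByteBuffer;
        import java.nio.channels.AsynchronousFileChannel;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class SyncVsAsyncRead {
            static final Path FILE = Paths.get("data.bin"); // assumed test file

            public static void main(String[] args) throws Exception {
                long t0 = System.nanoTime();
                byte[] all = Files.readAllBytes(FILE); // plain blocking read
                long t1 = System.nanoTime();

                // Async read: read() returns immediately with a Future; get()
                // blocks until completion. A real benchmark would loop, since
                // one read() call may fill only part of the buffer.
                try (AsynchronousFileChannel ch =
                         AsynchronousFileChannel.open(FILE, StandardOpenOption.READ)) {
                    ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
                    ch.read(buf, 0).get(); // start at position 0, wait for the result
                }
                long t2 = System.nanoTime();

                System.out.printf("sync: %d us, async: %d us (%d bytes)%n",
                        (t1 - t0) / 1_000, (t2 - t1) / 1_000, all.length);
            }
        }

    The setup cost the question suspects (thread pools, completion-handler plumbing) is exactly what a micro-benchmark like this can surface on small files.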

    Read the article

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service-oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So, it seems like it would make sense to make a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code):

        Function GetQuote(String ID) Returns Quote

    So, so far so good. Now, when this service is consumed, to keep things "de-coupled", we are creating essentially a duplicate of the Quote object and mapping from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases, these classes will have the exact same properties. So, if the Quote service is consumed by 5 other applications, we would have 6 definitions of what a "Quote" is: one for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?
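
    One common way out is the shared-contract idea: the data-transfer types live in a single contract library that both the service and its consumers reference, so the shape of a Quote is defined once. A minimal Java rendering of the idea (every name here is hypothetical):

        // quote-contract: a single shared library referenced by the service and
        // all of its consumers, so the shape of a Quote is defined exactly once.
        // Every name here is hypothetical.
        package com.example.quotes.contract;

        public record Quote(String id, String shipper, String consignee, String contactEmail) {}

        // in the same library:
        interface QuoteService {
            Quote getQuote(String id);
        }

    The usual counter-argument is that a shared contract couples every consumer to the service's release cycle, while the duplicated-DTO approach buys independent evolution at the price of DRY; which cost hurts more depends on how often Quote changes and who owns the consumers.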

    Read the article

  • How to proceed when a bug in open source libraries is suspected?

    - by Suma
    We are using some open source libraries in our projects. Sometimes issues are found in some of them (most likely library bugs, but it may also be wrong usage on our side, especially when the documentation is not exactly 100% complete). As the libraries are often quite complex, debugging them to pinpoint the source of the problem is sometimes quite hard. Can you help me summarize what options there are and how exactly to proceed with them? I have just recently hit some strange problems when using TCMalloc (Google's scalable memory allocator) on Windows, so I would most welcome answers which apply to this particular library, but more general answers are good as well.
    1) Ask the maintainer/owner of the project for assistance. How can this be done?
    2) Hire someone to identify and fix the issue. How to do this? How can I find someone with enough expertise in some particular library?
    ... any other options?

    Read the article

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality in one (or two) team members, making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input, instead of with a committee. I'm not advocating requiring permission if somebody else wants to change your code; maybe have code reviews always go to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when I suggested this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

    Read the article

  • Declarative Transactions in Node.js

    - by James Kingsbery
    Back in the day, it was common to manage database transactions in Java by writing code that did it. Something like this at the beginning and end of every method:

        Transaction tx = session.startTransaction();
        ...
        try {
            tx.commit();
        } catch (SomeException e) {
            tx.rollback();
        }

    This had some obvious problems: it's redundant, it hides the intent of what's happening, etc. So, along came annotation-driven transactions:

        @Transaction
        public SomeResultObj getResult(...) {
            ...
        }

    Is there any support for declarative transaction management in Node.js?

    Read the article

  • Fastest way to parse big JSON on Android

    - by jem88
    I've a doubt that doesn't let me sleep! :D I'm currently working with big JSON files, with many levels. I parse these objects using the 'default' Android way, so I read the response with a ByteArrayOutputStream, get a string, and create a JSONObject from the string. All fine here. Now, I have to parse the content of the JSON to get the objects of my interest, and I really can't find a better way than parsing it manually, like this:

        String status = jsonObject.getString("status");
        Boolean isLogged = jsonObject.getBoolean("is_logged");
        ArrayList<Genre> genresList = new ArrayList<Genre>();

        // Get jsonObject with genres
        JSONObject jObjGenres = jsonObject.getJSONObject("genres");
        // Instantiate an iterator on jsonObject keys
        Iterator<?> keys = jObjGenres.keys();
        // Iterate on keys
        while (keys.hasNext()) {
            String key = (String) keys.next();
            JSONObject jObjGenre = jObjGenres.getJSONObject(key);
            // Create genre object
            Genre genre = new Genre(
                jObjGenre.getInt("id_genre"),
                jObjGenre.getString("string"),
                jObjGenre.getString("icon"));
            genresList.add(genre);
        }

        // Get languages list
        JSONObject jObjLanguages = jsonObject.getJSONObject("languages");
        Iterator jLangKey = jObjLanguages.keys();
        List<Language> langList = new ArrayList<Language>();
        while (jLangKey.hasNext()) {
            // Iterate on jLangKey obj
            String key = (String) jLangKey.next();
            JSONObject jCurrentLang = (JSONObject) jObjLanguages.get(key);
            Language lang = new Language(
                jCurrentLang.getString("id_lang"),
                jCurrentLang.getString("name"),
                jCurrentLang.getString("code"),
                jCurrentLang.getString("active").equals("1"));
            langList.add(lang);
        }

    I think this is really ugly, frustrating, time-wasting, and fragile. I've seen parsers like json-smart and Gson... but it seems difficult to parse a JSON with many levels and get the objects! But I guess there must be a better way... Any idea? Every suggestion will be really appreciated. Thanks in advance!
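
    Gson can replace most of this hand-rolled traversal: declare small classes that mirror the JSON keys and let fromJson() do the mapping. A minimal sketch against the field names visible above (the class shapes are assumptions about the rest of the JSON):

        import com.google.gson.Gson;
        import com.google.gson.annotations.SerializedName;
        import java.util.Map;

        // Class shapes are guesses from the keys visible in the code above;
        // everything not shown in the question is an assumption.
        class Genre {
            @SerializedName("id_genre") int id;
            String string; // the genre's display name, matching the original key
            String icon;
        }

        class Language {
            @SerializedName("id_lang") String id;
            String name;
            String code;
            String active; // "1"/"0" in the JSON; convert to boolean where needed
        }

        class Response {
            String status;
            @SerializedName("is_logged") boolean isLogged;
            Map<String, Genre> genres;       // JSON object keyed by id, like the Iterator loop
            Map<String, Language> languages;
        }

        class Parser {
            Response parse(String json) {
                return new Gson().fromJson(json, Response.class); // one call replaces both loops
            }
        }

    For payloads too big to hold in memory at once, Gson's streaming JsonReader can walk the document incrementally instead of building the whole tree.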

    Read the article

  • How to handle a key in a PHP array if the key contains Japanese characters [migrated]

    - by Jim Thio
    I have this array:

        [ID] => ????????-???????????__35.79_139.72
        [Email] =>
        [InBuildingAddress] =>
        [Price] =>
        [Street] =>
        [Title] => ???????? ???????????
        [Website] =>
        [Zip] =>
        [Rating Star] => 0
        [Rating Weight] => 0
        [Latitude] => 35.7865334803033
        [Longitude] => 139.716800710514
        [Building] =>
        [City] => Unknown_Japan
        [OpeningHour] =>
        [TimeStamp] => 0000-00-00 00:00:00
        [CountViews] => 0

    Then I do something like this:

        $output[$info['ID']] = $info; // messes up here
        $tes = $info['ID']['Title'];

    Well, guess what: it messes up. So even though the content of a PHP array can be Japanese, using such a string as a key seems to fail. Is this true? What's wrong? The error I get is:

        Debug Warning: /sdfdsfdf/api/test2.php line 36 - Cannot find element ????????-???????????__35.79_139.72 in variable
        Debug Warning: /sdfdsfdf/api/test2.php line 36 - main() [function.main]: It is not safe to rely on the system's timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Asia/Krasnoyarsk' for '7.0/no DST' instead

    So many question marks! Why is this happening? What's really going on inside PHP? Where can I learn more about such things? Most importantly, what would be the best way to handle this situation? Should I tell PHP to internally always use UTF-8? Is it inherently impossible for a PHP array to use a non-ASCII key?

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I had only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to build a replacement for their HR software. I used SQL Server, Entity Framework, and WPF, along with MVVM and the Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (People, Salary, Vehicles/Assets, Statistics, etc.). I use integrated authentication, and thanks to the initial HR build I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard given their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing, and deploying it on my own. I'm partway through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes, and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources, and Miscellaneous Utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.

    Read the article

  • How important is it for a programmer to know how to implement a QuickSort/MergeSort algorithm from memory?

    - by John Smith
    I was reviewing my notes and stumbled across the implementations of different sorting algorithms. As I attempted to make sense of the implementations of QuickSort and MergeSort, it occurred to me that although I program for a living and consider myself decent at what I do, I have neither the photographic memory nor the sheer brainpower to implement those algorithms without relying on my notes. All I remembered is that some of those algorithms are stable and some are not, some take O(n log n) or O(n^2) time to complete, and some use more memory than others... I'd feel like I didn't deserve this kind of job, if it weren't for the fact that my position doesn't require me to use any sorting algorithm other than those found in standard APIs. I mean, how many of you have a programming position where it is actually essential that you can remember or come up with this kind of stuff on your own?
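
    For reference, the amount of code at stake is small; a from-memory, in-place quicksort (using the Lomuto partition scheme, a common teaching variant) fits in a couple of dozen lines of Java:

        import java.util.Arrays;

        public class QuickSort {
            public static void sort(int[] a) { sort(a, 0, a.length - 1); }

            private static void sort(int[] a, int lo, int hi) {
                if (lo >= hi) return;
                int p = partition(a, lo, hi); // pivot lands at its final index
                sort(a, lo, p - 1);
                sort(a, p + 1, hi);
            }

            // Lomuto partition: everything <= pivot ends up left of the pivot's slot.
            private static int partition(int[] a, int lo, int hi) {
                int pivot = a[hi];
                int i = lo;
                for (int j = lo; j < hi; j++) {
                    if (a[j] <= pivot) { swap(a, i++, j); }
                }
                swap(a, i, hi);
                return i;
            }

            private static void swap(int[] a, int i, int j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }

            public static void main(String[] args) {
                int[] xs = {5, 2, 9, 1, 5, 6};
                sort(xs);
                System.out.println(Arrays.toString(xs)); // [1, 2, 5, 5, 6, 9]
            }
        }

    Remembering that quicksort is "partition, then recurse on both halves" is usually enough to rebuild the rest; the fiddly part people forget is the partition invariant, not the big picture.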

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:
    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)
    but:
    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed
    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
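
    A common near-real-time pattern keeps one long-lived IndexWriter, commits periodically, and refreshes searchers instead of ever closing the writer. Note that a SearcherManager can be built directly from the writer, and later Lucene versions replaced NRTManager with ControlledRealTimeReopenThread. A sketch against the Lucene 5.x-era API (constructor signatures shift between major versions, so treat the exact calls as assumptions for your version):

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.SearcherManager;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.RAMDirectory;

        public class NrtSketch {
            public static void main(String[] args) throws Exception {
                Directory dir = new RAMDirectory(); // in-memory index, for the sketch only
                IndexWriter writer =
                        new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
                // Built straight from the writer; no NRTManager needed.
                SearcherManager manager = new SearcherManager(writer, true, null);

                Document doc = new Document();
                doc.add(new TextField("body", "hello lucene", Field.Store.YES));
                writer.addDocument(doc);

                writer.commit();        // durability: the writer stays open afterwards
                manager.maybeRefresh(); // visibility: new searchers see the change

                IndexSearcher searcher = manager.acquire();
                try {
                    System.out.println("docs visible: " + searcher.getIndexReader().numDocs());
                } finally {
                    manager.release(searcher); // always release; never close it yourself
                }
            }
        }

    So commit() and close() are decoupled: committing makes changes durable without giving up the single writer instance, which answers the "make only commits and never close it" part in the affirmative for long-running processes.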

    Read the article
