Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • JEE frameworks: a road map for learning them? And should I learn them at all?

    - by vibhor
    Background: I have been programming professionally for the past year. My day-to-day work includes writing BIRT reports and designing and validating forms using JEE (Struts/Spring, Hibernate). I don't have a four-year comp-sci degree (mine is in Electronics), so my formal computer-science background is very limited. Question: JEE frameworks (Struts 1/2, Spring, Hibernate, etc.) are hot nowadays; however, the Java world has a tendency to build A4J, B4J... whatever4J kinds of things (and I am tired of it). AFAIK, a framework is nothing but a bunch of XML config files and hundreds of classes for the developer to cram. And sooner or later a new framework comes along claiming to be the best of them all. So my questions are: 1. How would you go about learning a framework (or many frameworks), considering that it may be obsolete by the time you have mastered it (learning a framework takes a significant amount of time)? 2. Early in your career, does it really matter how well someone knows a particular framework (knowing a framework is important, but still...), and why/how should I learn one knowing I will have to (un)learn it in order to learn the next (plenty of 4Js...)? I am just trying to get the big picture: if you were in my place, what would your learning strategy (road map) be? I am not trying to start a holy war between A and B; frameworks are more or less essential.

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote, and still maintain, a large e-commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and the search/suggestion-list functionality (enter some letters, a JavaScript suggestion list appears) has caught our eye. Currently we use http://xapian.org/. It has some drawbacks. Firstly, it's not really the right tool: it was created to index documents, not ever-changing data at the granularity an e-commerce application needs. Secondly, the load on the database is significant when we reindex all the data every night. We'd like a framework designed for indexing database data: one that can add to the index easily and without much load, and that can push back-office data changes to the frontend quickly, without much load or delay. I'm aware that Xapian is open source and even free software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a better-suited solution seems fair, right? Oh, and commercial products are fine too; FOSS is not required. Thanks a bunch.
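
    To illustrate the kind of incremental, per-record updating we are after, here is a minimal sketch using Apache Lucene as a stand-in (the field names and id scheme are invented for illustration, not a recommendation):

        import java.nio.file.Paths;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.*;
        import org.apache.lucene.index.*;
        import org.apache.lucene.store.FSDirectory;

        public class ProductIndexer {
            public static void main(String[] args) throws Exception {
                try (IndexWriter writer = new IndexWriter(
                        FSDirectory.open(Paths.get("product-index")),
                        new IndexWriterConfig(new StandardAnalyzer()))) {
                    Document doc = new Document();
                    doc.add(new StringField("id", "sku-4711", Field.Store.YES));
                    doc.add(new TextField("name", "stainless travel mug", Field.Store.YES));
                    // Replaces any previous document with the same id, so one
                    // changed record is updated in place instead of triggering
                    // a full nightly reindex.
                    writer.updateDocument(new Term("id", "sku-4711"), doc);
                }
            }
        }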

    Read the article

  • Learning the Go programming language and its prospects [closed]

    - by SHOUBHIK BOSE
    Possible Duplicate: What are the chances of Google's Go becoming a mainstream language? Recently I've started experimenting with the Go programming language from Google. It's a programmer-friendly language with the simplicity of Python. I was wondering whether companies other than Google will also start using Go for development, and if they do, what the prospects of being a Go programmer would be.

    Read the article

  • Artificial Intelligence implemented in x86 Assembly? [closed]

    - by Bigyellow Bastion
    Okay, so I've decided that for my upcoming operating system I'll do basically everything in x86 assembly, using only 16-bit mode. I will need to write the software it hosts once I have something up and running, and I'll definitely post the source and a VM-executable file. But for now I'm stuck on how to implement the AI code for some of the games I'm making to host on it. AI in assembly is tedious, and sometimes seems almost impossible, especially complex AI (I'm talking SNES Super Mario World 2: Yoshi's Island AI here, by the way, not Pong AI). I was thinking it would be such a hassle that I'd have to bring in a higher-level language, maybe C++ or C#, to work some of this out, but then I'd have to do extra work linking it into a binary my OS can host, and that puts unnecessary work on the table that I wanted to avoid (I don't want a complex system; I want everything as bare-bones as possible, avoiding libraries, APIs, and linkable formats for now, to keep everything directly accessible to the kernel's API).

    Read the article

  • Can SAML use Salesforce login information to log into another system inside a view?

    - by steve
    I want my salespeople, who use Salesforce every day, to be able to view orders in an e-commerce system through a dashboard view in Salesforce. The e-commerce system is built and sitting on my web server, but the sales reps don't like logging into too many things in one day, so they are not using what I built for them. I recently read that Salesforce can use SAML, but it was unclear what you can actually do with it. What I'd like is to make a new dashboard view that opens the e-commerce system inside Salesforce. The e-commerce system has its own login; if it is embedded inside Salesforce, would SAML automatically log the user into it?

    Read the article

  • Preferred lambda syntax?

    - by Roger Alsing
    I'm playing around with my own C-like DSL grammar and would like some opinions. I've reserved the use of "(...)" for invocations, e.g.:

        foo(1,2);

    My grammar supports "trailing closures", pretty much like Ruby's blocks, which can be passed as the last argument of an invocation. Currently my grammar supports trailing closures like this:

        foo(1,2) {
            //parameterless closure passed as the last argument to foo
        }

    or

        foo(1,2) [x] {
            //closure with one argument (x) passed as the last argument to foo
            print (x);
        }

    The reason I use [args] instead of (args) is that (args) is ambiguous:

        foo(1,2) (x) { }

    In this case there is no way to tell whether foo expects three arguments (int, int, closure(x)), or whether foo expects two arguments (int, int) and returns a closure with one argument, closure(x). So that's pretty much why I use [] for now. I could change it to something like:

        foo(1,2) : (x) { }

    or

        foo(1,2) (x) -> { }

    So the actual question is: what do you think looks best? [...] is somewhat wrist-unfriendly:

        let x = [a,b] { }

    Ideas?

    Read the article

  • In the future, when mobile devices are embedded in your body, what kind of APIs might be available to an application developer?

    - by Conor
    Mobile devices have APIs that allow an application to send and receive SMS, make a phone call, determine location, etc. In the future, when mobile devices are embedded in your body, what kind of APIs might be available to an application developer? EDIT: This is not intended as a joke question (but what's the harm in some funny answers?). It's meant to spur a discussion on how one aspect of mobile application development could pan out and what kinds of applications might become possible. For example: health monitoring, with various APIs to read body temperature, blood sugar levels, etc. for transmission to your GP.
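
    To make that concrete, here is a purely hypothetical sketch of what such an API could look like (every name below is invented; no such API exists today):

        // Purely hypothetical body-sensor API, invented for illustration only.
        public interface BodySensorManager {
            double getBodyTemperatureCelsius();
            double getBloodGlucoseMmolPerL();
            int getHeartRateBpm();

            // Ask to be called back when a reading leaves a safe range,
            // e.g. to forward an alert to the wearer's GP.
            void onThresholdExceeded(String metric, double threshold,
                                     Runnable alertHandler);
        }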

    Read the article

  • String patterns that can be used to filter and group files

    - by Louis Rhys
    One of our applications filters files in a certain directory, extracts some data from them, and exports a document built from the extracted data. The algorithm for extracting the data depends on the file, and so far we use a regex to select the algorithm to be used; for example, .*\.txt is processed by algorithm A, foo[0-5]\.xml by algorithm B, etc. However, we now need some files to be processed together. For example, in one case we need two files, foo.*\.xml and bar.*\.xml: part of the information to be extracted is in the foo file and the other part in the bar file. Moreover, we need to make sure the wildcards are compatible. For example, given the six files foo1.xml, foo23.xml, bar1.xml, bar9.xml, bar23.xml, and foo4.xml, I would expect foo1 and bar1 to be identified as one group, and foo23 and bar23 as another. bar9 and foo4 have no pair, so they will not be processed. Now, since the filter is configured by the user, we need a pattern syntax that can express the above requirement. I don't think standard regex can express it: (foo|bar).*\.xml matches all six files, but we can't identify which file is paired with which. Is there any standard pattern syntax that can express this? Or any idea how to extend regex to support it in a way that can be implemented easily?
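
    One direction that might work (a sketch, not a standard): keep plain regexes, but require each pattern in a group to mark the shared part with a named capture group, and group files whose captures agree. A minimal Java illustration under that assumption:

        import java.util.*;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FileGrouper {
            public static void main(String[] args) {
                // Both patterns capture the shared part under the same name.
                List<Pattern> patterns = List.of(
                        Pattern.compile("foo(?<key>.*)\\.xml"),
                        Pattern.compile("bar(?<key>.*)\\.xml"));
                List<String> files = List.of("foo1.xml", "foo23.xml", "bar1.xml",
                                             "bar9.xml", "bar23.xml", "foo4.xml");
                Map<String, List<String>> groups = new HashMap<>();
                for (String f : files) {
                    for (Pattern p : patterns) {
                        Matcher m = p.matcher(f);
                        if (m.matches()) {
                            groups.computeIfAbsent(m.group("key"),
                                                   k -> new ArrayList<>()).add(f);
                        }
                    }
                }
                // Only keys matched by every pattern form a complete group;
                // bar9 and foo4 stay unpaired and are skipped.
                groups.forEach((key, members) -> {
                    if (members.size() == patterns.size()) {
                        System.out.println("group " + key + ": " + members);
                    }
                });
            }
        }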

    Read the article

  • Open source framework quality [closed]

    - by Jonas Byström
    It's not hard to find snippets, components, or tools/toolkits in the open source world that hold the quality bar really high. I use git, Python, Linux, GCC, bash, and a whole range of others on a daily basis, and I love them. But when it comes to bigger frameworks, which are intended to carry the larger tasks of an application without much interference, I'm not as enthusiastic. I've tried a few commercial frameworks (game engines), which were okay, but every big open source framework I've used myself, or seen used in applications, was decidedly worse than its commercial equivalent. I'm not sure whether my experience is typical, though. Where have bigger open source frameworks for the larger tasks of an application been able to equal or exceed commercial frameworks, and in what ways were they better?

    Read the article

  • Learning about security and finding exploits

    - by Jayraj
    First things first: I have absolutely no interest in learning how to crack systems for personal enrichment, to hurt other people, or to do anything remotely malicious. I understand the basis of many exploits (XSS, SQL injection, use-after-free, etc.), though I've never performed any myself. I even have some idea of how to guard web applications against common exploits (like the aforementioned XSS and SQL injection). Reading this question about the Internet Explorer zero-day vulnerability on the Security SE piqued my curiosity and made me wonder: how did someone even find that exploit? What tools did they use? How did they know what to look for? I'm wary of visiting hacker dens online for fear of getting my own system infected (the Defcon stories make me paranoid). So what's a good, safe place to start learning?

    Read the article

  • Preventing RSI (Repetitive Strain Injuries)

    - by nightcracker
    I am 16 years old and I love programming and playing the piano. It's not uncommon for me to be bashing away on my mouse and keyboard all day long, and I don't feel any pain doing so. Yet I am still worried, because I often hear about people who can never again type for longer than 10 minutes without getting severe pain. Given my two hobbies, programming and playing the piano, that worries me a lot. My current setup is this: a G15 keyboard and a G5 mouse; a chair that looks like this (the back of the chair is surprisingly supportive): http://www.ikea.com/nl/nl/images/products/torbjorn-bureaustoel__0084333_PE210956_S4.JPG; in my normal sitting position the table is around the height of my belly button; and an LG Flatron L194WT screen (too small IMO, getting a new one soon). Should I be worried about RSI or similar health issues? If yes, what can/should I do about it?

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking software design courses over the past few semesters, and while I see the benefit of a lot of the formalism, I still feel it tells me nothing about the program itself. You can't tell how the program will operate from the use case spec, even though it discusses what the program can do, and you can't tell anything about the user experience from the requirements document, even though it can include QA requirements. Sequence diagrams are as good a description of how the software works as the call stack; in other words, a very limited, highly partial view of the overall system. A class diagram is great for describing how the system is built, but utterly useless in helping you figure out what the software needs to be. Where in all this formalism is the bottom line: how the program looks, how it operates, and what experience it gives? Doesn't it make more sense to design around that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know I'm probably suffering from being taught engineering by theoreticians, but I have to ask: do they do this in industry? How do people figure out what the program actually is, rather than what it should conform to? Do people prototype a lot, or do they mostly use formal tools like UML, and I just haven't gotten the hang of using them yet?

    Read the article

  • How do you update copyright notices?

    - by James
    So now it's 2011, and as I carry on coding on our active projects it's time to update some copyright notices, e.g. from Copyright Widgets Ltd 2010 to Copyright Widgets Ltd 2010, 2011. My question is: when do you update the copyright notices? Do you change the notice in the head of a file the first time you work on that file? Since a module is one piece of code consisting of many files that work together, do you update all the notices in a module when you change a single file in it? Since a program is one piece of code (maybe consisting of many modules), do you update all the notices in a program when you change a single file in it? Or do you just go through and change them all en masse over your morning coffee, on the grounds that you're about to start programming and updating things?

    Read the article

  • History of open source software

    - by Victor Sorokin
    I've always been interested, purely for my own amusement, in the history of the open source software used today: who the people who started it were, what their reasons for starting were, what the design decisions were at the outset, and how the software evolved over time. Specifically, I'm interested in the following software: GCC, X, the Linux kernel, and Java. Of course, there is plenty of information on the Internet to google for, but I thought it would be nice to have a list of interesting resources on this site. I hope some visitors of this site have a similar interest and can share a link or two they found particularly amusing or interesting. To make this entry more question-like, here's the straight question: what are the most interesting/amusing links about the history of open source software?

    Read the article

  • How would you practice concurrency and multi-threading?

    - by Xavier Nodet
    I've been reading about concurrency and multi-threading, and how "the free lunch is over". But I haven't yet had the chance to use MT in my job. I'm thus looking for suggestions on how to get some practice with CPU-heavy multi-threading, through exercises or participation in some open-source projects. Thanks. Edit: I'm more interested in open-source projects that use MT for CPU-bound tasks, or simply in algorithms that are interesting to implement using MT, than in books or papers about the tools, like threads, mutexes, and locks...
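
    For a flavor of the kind of exercise I mean, a classic CPU-bound starter is a parallel Monte Carlo estimate of pi; a minimal Java sketch (one possible exercise, not a prescription):

        import java.util.concurrent.ThreadLocalRandom;
        import java.util.stream.IntStream;

        public class MonteCarloPi {
            public static void main(String[] args) {
                int samples = 50_000_000;
                // Each sample is independent, so the work parallelizes cleanly
                // across cores; compare wall-clock time with and without .parallel().
                long inside = IntStream.range(0, samples).parallel()
                        .filter(i -> {
                            double x = ThreadLocalRandom.current().nextDouble();
                            double y = ThreadLocalRandom.current().nextDouble();
                            return x * x + y * y <= 1.0;
                        })
                        .count();
                System.out.println("pi is approximately " + 4.0 * inside / samples);
            }
        }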

    Read the article

  • How is architectural design done in an agile environment?

    - by B?????
    I have read Principles for the Agile Architect, which defines the following principles:

    Principle #1: The teams that code the system design the system.
    Principle #2: Build the simplest architecture that can possibly work.
    Principle #3: When in doubt, code it out.
    Principle #4: They build it, they test it.
    Principle #5: The bigger the system, the longer the runway.
    Principle #6: System architecture is a role collaboration.
    Principle #7: There is no monopoly on innovation.

    The paper says that most of the architectural design is done during the coding phase, with only the system design coming before it. That is fine. So how is the system design done? Using UML? A document that defines interfaces and major blocks? Maybe something else?

    Read the article

  • So we've got a code review tool, now what can we use for software documents?

    - by Tini
    We're using Subversion as a full CM system for code and also for related project documents, and we have JIRA and FishEye. When we wanted to add a peer review tool, we looked at and tested several candidates. Our weighted requirements included both code and document review, but ultimately the integration with JIRA tilted the scores in Crucible's favor. Atlassian has slammed the door on ever supporting Word or PDF in Crucible. I've tested several workarounds to make Crucible work for documents, without success. (The Confluence/Crucible plugin was deprecated by Atlassian, so that option is out, too.) I haven't found a plugin for Crucible that adds this functionality, so short of writing my own plugin, Crucible for documents is unworkable. Word's Track Changes doesn't provide a method for true collaboration and commenting. Adobe's PDF comment and markup is interesting, but doesn't provide a great way to keep a permanent quality record of the conversation. We can't go cloud-based; our documents must be hosted locally, on our own server only. And we're only on SharePoint 2007. Help! Anyone have a suggestion?

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping the code in living projects up to date, including the libraries and frameworks they use. Part of it is that I believe a web app is more secure when it is fully patched and up to date; part of it is just a touch of obsessive-compulsiveness on my part. Over the past seven months we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to CakePHP. (We chose Cake because it let us do a very rapid rewrite of our software, with enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0, and there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have changed radically, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch, because unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time: so many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a shiny new toy, a few critical patches get applied to the older branches, but most of their focus goes to the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    I've always been under the impression that doing work on the GPU is always faster than doing it on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of them into quads with 400 vertices, whose heights were determined by a noise function. This worked fine and looked great, but it easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefit of using the GPU as much as possible. So, I was wondering if someone could describe the general pros and cons of using the GPU versus the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why "CPU tessellation + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (the newest MacBook Pro) isn't well optimized for the geometry shader (thus causing the slow framerate). Also, I've read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
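
    For reference, the CPU side of the faster path amounts to nothing more than generating the flat grid up front; a minimal sketch in Java (the array layout and sizes are illustrative), with the per-vertex height displacement left to the vertex shader:

        /** Tessellate one quad into an n-by-n grid of vertices on the CPU.
         *  The vertex shader then displaces each vertex's height with the
         *  noise function, as described above. */
        public class GridTessellator {
            public static float[] buildGrid(int n, float size) {
                // (n+1)^2 vertices, 3 floats (x, y, z) each; y is filled in
                // on the GPU, so it stays 0 here.
                float[] verts = new float[(n + 1) * (n + 1) * 3];
                int i = 0;
                for (int row = 0; row <= n; row++) {
                    for (int col = 0; col <= n; col++) {
                        verts[i++] = size * col / n; // x
                        verts[i++] = 0f;             // y, displaced in the shader
                        verts[i++] = size * row / n; // z
                    }
                }
                return verts;
            }

            public static void main(String[] args) {
                System.out.println(buildGrid(20, 10f).length / 3 + " vertices");
            }
        }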

    Read the article

  • Java desktop application for network users

    - by Motasem Abu Aker
    I'm developing a desktop application in Java. The application will run in a network environment where multiple users access the same database through it, with basic CRUD operations (insert, update, delete, and select). That means there is a chance of deadlocks, or of two users trying to update the same record at the same time. I'm using Java Swing for the clients (MVC), MySQL Server for the database (InnoDB), and Java Web Start. MySQL is centralized on the network, and all the clients connect to it. The application is for ERP purposes. I've searched the internet for a good solution to ensure data integrity and to make sure that when one client updates a record, the other clients are aware of it. I've read about socket server/client designs and RESTful web services, but I don't want to build a web application and I don't want to use any extra libraries. So how can I handle this scenario: 1. If user A updates a record, is there a way to update user B's screen with the new value? 2. If user A starts updating a record, how can I prevent other users from updating the same record at the same time?
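
    On the second point, one common pattern that needs no extra libraries is optimistic locking with a version column. A minimal JDBC sketch (the table and column names are invented for illustration):

        import java.math.BigDecimal;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class OrderDao {
            /** Returns false if another client changed the row since we read it. */
            public boolean updateTotal(Connection conn, long orderId,
                                       BigDecimal newTotal, long versionWeRead)
                    throws SQLException {
                String sql = "UPDATE orders SET total = ?, version = version + 1 "
                           + "WHERE id = ? AND version = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setBigDecimal(1, newTotal);
                    ps.setLong(2, orderId);
                    ps.setLong(3, versionWeRead);
                    // 0 rows updated means our read is stale: reload the
                    // record and let the user retry.
                    return ps.executeUpdate() == 1;
                }
            }
        }

    For the first point, without extra libraries the clients would have to poll that version column periodically to refresh their screens; pushing changes out instead would require each client to listen on a socket.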

    Read the article

  • How does it matter if a character is 8-bit, 16-bit, or 32-bit?

    - by vin
    Well, I am reading Programming Windows with MFC, and I came across Unicode and ASCII characters. I understand the point of using Unicode over ASCII, but what I do not get is how and why it matters whether a character is 8-bit, 16-bit, or 32-bit. What good does it do the system? How does the operating system's processing differ for characters of different widths? My question is: what does it mean for a character to be an x-bit character?
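
    The width question is really an encoding question: the same characters occupy different numbers of bytes depending on the encoding, and the OS has to agree with your program on which one is in use. A small illustration (in Java rather than MFC's C++, but the point is language-independent):

        import java.nio.charset.StandardCharsets;

        public class CharWidths {
            public static void main(String[] args) {
                // 'A' (ASCII), the euro sign (U+20AC), and a musical clef (U+1D11E).
                String s = "A\u20AC\uD834\uDD1E";
                System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 8 = 1 + 3 + 4
                System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length); // 8 = 2 + 2 + 4
                System.out.println(s.length());                      // 4 UTF-16 code units
                System.out.println(s.codePointCount(0, s.length())); // 3 actual characters
            }
        }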

    Read the article

  • Language parsing to find important words

    - by Matt Huggins
    I'm looking for some input and theory on how to approach a lexical topic. Let's say I have a collection of strings, each of which may be just one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely each word is to be important. Let's look at a few examples of what I mean. Example #1: "I really want a Keurig, but I can't afford one!" This is a very basic example, just one sentence. As a human, I can easily see that "Keurig" is the most important word here. "Afford" is also relatively important, though it's clearly not the primary point of the sentence. The word "I" appears twice, but it is not important at all, since it doesn't really tell us any information. I might expect a hash of words and scores something like this: "Keurig" => 0.9, "afford" => 0.4, "want" => 0.2, "really" => 0.1, etc. Example #2: "Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch." This example has multiple sentences, so there will be more important words throughout. Without repeating the scoring exercise from example #1, I would probably expect two or three really important words to come out of this: "swimming" (or "swimming practice"), "competition", and "watch" (or "waterproof watch" or "non-waterproof watch", depending on how the hyphen is handled). Given a couple of examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms in programming that already do this?
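
    The standard starting point for exactly this is TF-IDF: a word scores highly when it is frequent in the given text but rare across a background corpus, which is why "Keurig" would outrank "I". A minimal sketch (in Java; the document frequencies are toy numbers standing in for a real corpus):

        import java.util.*;
        import java.util.stream.Collectors;

        public class KeywordScorer {
            static Map<String, Double> tfidf(List<String> words,
                                             Map<String, Integer> docFreq,
                                             int corpusSize) {
                Map<String, Long> tf = words.stream()
                        .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
                Map<String, Double> scores = new HashMap<>();
                tf.forEach((word, count) -> {
                    // Words unseen in the corpus are treated as very rare.
                    int df = docFreq.getOrDefault(word, 1);
                    scores.put(word, count * Math.log((double) corpusSize / df));
                });
                return scores;
            }

            public static void main(String[] args) {
                List<String> words = List.of("i", "really", "want", "a", "keurig",
                        "but", "i", "can't", "afford", "one");
                // Toy document frequencies out of a 1,000-document corpus.
                Map<String, Integer> df = Map.of(
                        "i", 990, "a", 995, "but", 900, "one", 850, "can't", 300,
                        "really", 500, "want", 400, "afford", 80, "keurig", 3);
                tfidf(words, df, 1000).entrySet().stream()
                        .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                        .forEach(e -> System.out.printf("%s => %.2f%n",
                                                        e.getKey(), e.getValue()));
            }
        }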

    Read the article

  • Why does Xcode convert PNGs to CgBI format?

    - by Gdeglin
    According to the research done at http://imageoptim.com/tweetbot.html, Xcode's conversion of PNGs to the proprietary Apple CgBI format does not create a noticeable performance improvement; the claim is that the conversion improves PNG loading by only about a nanosecond. If this is true, why does Apple bother with the CgBI format at all? Has anyone else benchmarked loading CgBI images versus regular PNG images on iOS devices to see whether they perform differently?

    Read the article

  • How To Deal With Terrible Design Decisions

    - by splatto
    I'm a consultant at a company, along with another consultant, who is a year older than me and has been here three months longer, and a full-time developer. The full-time developer is great. My concern is that I see the other consultant making absolutely terrible design decisions. For example, M:M relationships are being stored in the database as comma-delimited strings rather than in a junction table. Consider two tables, Car and Property:

    Car records: Camry, Volvo, Mercedes
    Property records: Spare Tire, Satellite Radio, Ipod Support, Standard

    Rather than making a CarProperties table to represent this, he has added a "Property" attribute to the Car table whose data looks like "1,3,7,13,19,25,". I hate how this decision and others are affecting the quality of my code. We have butted heads over this design three times in the two months I've been here. He asked me why my suggestion was better, and I responded that our database would eliminate redundant data by converting to a higher normal form. I explained that this particular design flaw is discussed and discouraged in entry-level college programs, and he responded with a shot at me, saying that these comma-separated-value database properties are taught when you do your master's (which neither of us has). Needless to say, he became very upset and demanded I apologize for criticizing his work, which I did in the interest of not being the consultant who creates office drama. Our project manager is focused on delivering a product ASAP and is a very strong personality; suggesting to him at this point that we spend some time doing this right will set him off. There is a strong likelihood that both of our contracts will be extended to work on a second project coming up. How can I exert enough influence over the design of the system and the data model to ensure that such terrible mistakes are not repeated in the next project? A glimpse at the dynamics:

    I can be a strong personality if I don't check myself.
    The other consultant is not a strong personality, is a poor communicator, is quite stubborn, and thinks he is better than everyone else.
    The project manager is an extremely strong personality, focused on releasing tomorrow's product yesterday.
    The full-time developer is very laid back and easygoing, a very effective communicator, but someone who will accept bad design if it means not rocking the boat.

    Code reviews or anything else that takes "time" will be out of the question; there is no way our PM could be sold on such a thing by anybody.
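
    For reference, the junction-table design being argued for looks roughly like this (a sketch in SQL; the table and column names are illustrative):

        -- A junction table replaces the comma-delimited "Property" column:
        CREATE TABLE CarProperties (
            car_id      INT NOT NULL,
            property_id INT NOT NULL,
            PRIMARY KEY (car_id, property_id),
            FOREIGN KEY (car_id)      REFERENCES Car(id),
            FOREIGN KEY (property_id) REFERENCES Property(id)
        );

        -- "Which cars have a spare tire?" becomes an ordinary indexed join
        -- instead of string-parsing a comma-delimited column:
        SELECT c.name
        FROM Car c
        JOIN CarProperties cp ON cp.car_id = c.id
        JOIN Property p       ON p.id = cp.property_id
        WHERE p.name = 'Spare Tire';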

    Read the article

  • How can I introduce a business analyst to data modelling?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet, and I am the programmer responsible for importing it. Usually when they push hard for something like this, I don't know how well it will work out until a few weeks later, when I am assigned time to program the import of the data. I try to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to look at all the data in detail and compare it to the destination in the database to see how well it matches up. Often there will be a little table of items that I somehow have to relate to something else in the database, but no natural or business keys are present that would allow me to do so. I make the best of it, writing something that compares strings and makes a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel that if the business analysts were actually creating a data model, they would be forced to think about these relationships, and they would appreciate the need for natural or business keys in the spreadsheet so that the data can be imported smoothly. The closest they come to business analysis is a big flat list of fields, which would be fine if, like any other data dictionary, it included data types and relationships, but it doesn't; it is just a bunch of names, with no indication of what kind of data each field might hold, and it is up to me to guess. When I have pushed for more detail, they say it is just busywork. How can I explain the importance of data modelling? How can I show them what it is and how to do it? It feels impossible, because they have no appreciation of its importance. They do, however, usually want to help out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article
