Search Results

Search found 17097 results on 684 pages for 'entry level'.

Page 416/684

  • In an online questionnaire, what is the best way to design a database to keep track of all of a user's attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin creates a questionnaire, and a questionnaire can have many questions. Each question is a multiple choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions and users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all their attempts, for example:
        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       1 June 2013   1
        1        1                 2            b       1 June 2013   1
        1        1                 1            c       2 June 2013   2
        1        1                 2            d       2 June 2013   2
    It can also happen that, after a user has attempted the same questionnaire twice, the admin deletes a question from that questionnaire. The user's attempt history should still reference it, so that the user can see that question in his attempt history in spite of the admin deleting it. If the user now attempts the changed questionnaire, he should see only 1 question:
        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       3 June 2013   3
    Also, if a question is later modified, the user's attempt history should show the question as it was before the modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not do a physical delete, just mark the question inactive so that history can still keep track of the user's attempts; for modifications, create versions of questions, with each new attempt referring to the latest version of each question and the history keeping a reference to the version of the question at attempt time.
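    A minimal sketch of the soft-delete-plus-versioning schema described above (table and column names are illustrative assumptions, not taken from the original post): questions are never physically deleted, every edit adds a row to question_version, and every recorded answer points at the exact version shown during the attempt.
        CREATE TABLE question (
            question_id       INT PRIMARY KEY,
            questionnaire_id  INT NOT NULL,
            is_active         CHAR(1) DEFAULT 'Y' NOT NULL  -- soft delete: set to 'N' instead of deleting
        );

        CREATE TABLE question_version (
            question_version_id  INT PRIMARY KEY,
            question_id          INT NOT NULL REFERENCES question (question_id),
            version_no           INT NOT NULL,
            question_text        VARCHAR(2000) NOT NULL,
            UNIQUE (question_id, version_no)               -- new attempts use the highest version_no
        );

        CREATE TABLE attempt_answer (
            user_id              INT NOT NULL,
            questionnaire_id     INT NOT NULL,
            question_version_id  INT NOT NULL REFERENCES question_version (question_version_id),
            answer               CHAR(1),
            attempt_no           INT NOT NULL,
            attempt_date         DATE NOT NULL,
            PRIMARY KEY (user_id, question_version_id, attempt_no)
        );
    With this layout, history queries join attempt_answer to question_version, so a deleted or edited question still renders exactly as the user saw it, while new attempts pick the latest version of each active question.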

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace using the SLOW sql (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer.
    Open a new session in SQL*Plus. Make sure you are using an updated PLAN_TABLE; this can be done by dropping it and recreating it by running @?/rdbms/admin/utlxplan.sql. Then:
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character >>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT
    Open a second session in SQL*Plus:
        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character >>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT
    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?
        sqlplus "/ as sysdba"
        show parameters dump_dest
    This will show you the bdump, cdump and udump locations.

    Read the article

  • Is this a decent use-case for goto in C?

    - by Robz
    I really hesitate to ask this, because I don't want to "solicit debate, arguments, polling, or extended discussion", but I'm new to C and want to gain more insight into common patterns used in the language. I recently heard some distaste for the goto statement, but I've also recently found a decent use-case for it. Code like this:
        error = function_that_could_fail_1();
        if (!error) {
            error = function_that_could_fail_2();
            if (!error) {
                error = function_that_could_fail_3();
                // ...to the n-th tab level!
            } else {
                // deal with error, clean up, and return error code
            }
        } else {
            // deal with error, clean up, and return error code
        }
    If the clean-up part is all very similar, it could be written a little more cleanly (my opinion?) like this:
        error = function_that_could_fail_1();
        if (error) { goto cleanup; }
        error = function_that_could_fail_2();
        if (error) { goto cleanup; }
        error = function_that_could_fail_3();
        if (error) { goto cleanup; }
        // ...
        cleanup:
        // deal with error if it exists, clean up
        // return error code
    Is this a common or acceptable use-case of goto in C? Is there a different/better way to do this?
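    For context, a self-contained sketch of how this idiom commonly appears with real resources (the function name, file name, and error codes are invented for the example): every failure path jumps to a single cleanup section that releases whatever was acquired so far.
        #include <stdio.h>
        #include <stdlib.h>

        int process_file(const char *path)
        {
            int error = 0;
            FILE *f = NULL;
            char *buffer = NULL;

            f = fopen(path, "rb");
            if (f == NULL) { error = 1; goto cleanup; }

            buffer = malloc(4096);
            if (buffer == NULL) { error = 2; goto cleanup; }

            if (fread(buffer, 1, 4096, f) == 0) { error = 3; goto cleanup; }

            /* ... work with buffer ... */

        cleanup:
            free(buffer);        /* free(NULL) is a no-op */
            if (f != NULL) {
                fclose(f);
            }
            return error;
        }

        int main(void)
        {
            return process_file("example.dat");
        }
    Because cleanup runs regardless of which step failed, each resource needs only one release site, which is the argument usually made for this pattern.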

    Read the article

  • Help w/ iPad 1 performance for tile-based DOM Javascript game

    - by butr0s
    I've made a 2D tile-based game with DOM/Javascript. For each level, the map data is loaded and parsed, then lots of tile elements are drawn onto a larger "map" element. The map is inside of a container that hides overflow, so I can move the map element around by positioning it absolutely. Works a treat on desktop browsers and my iPad 2. My problem is that performance is really bad on the iPad 1. The performance hit is directly related to all the tile elements in my map, because when I remove or reduce the number of tiles drawn, performance improves. Optimizing my collision detection loop has no effect. My first thought was to batch groups of tiles into containers, then hide/show them based on proximity to the player; however, this still causes a huge hiccup when the player moves and a new group of tiles is displayed (offscreen). Actually removing the out-of-sight elements from the DOM, then re-adding them as necessary, is no faster. Anyone know of any tips that might speed up DOM performance here? My map is 1920 x 1920 pixels, so as far as I know it should be within the WebKit texture limit on iOS 5/iPad. The map is being moved with CSS3 transforms, and I've picked all the other obvious low-hanging fruit.

    Read the article

  • WordPress mod_rewrite redirect specific folders

    - by Ps Cjef
    System: Debian Etch, Apache 2.2. I have a WordPress instance with multiple blogs. I would like to redirect some of the folders based on the year and month, while leaving other folders pointing to their actual locations. Example: I have archives for a few years, like 2010, 2011 and 2012:
        http://mydomain.com/wordpress/myblog/2010/02
        http://mydomain.com/wordpress/myblog/2011/01
        http://mydomain.com/wordpress/myblog/2012/01
    I would like to redirect all 2010 and 2011 posts to another blog with the same folder structure:
        http://mydomain.com/wordpress/myotherblog/2010/02
        http://mydomain.com/wordpress/myotherblog/2011/01
    and so on, while 2012 and beyond go to the actual site (http://mydomain.com/wordpress/myblog/2012/01). I tried mod_rewrite with the following, one rule at a time, to test redirection for just one year (and to expand later to other years), and none of them worked:
    * RewriteEngine is already on, since there are some default WordPress rewrites.
    * RewriteBase is set to http://mydomain.com/wordpress/ .
    * I put my rule before all the other default WordPress rules are processed.
    Solution #1 (didn't work):
        RedirectMatch 301 /myblog/2010/(.*) /myotherblog/2010/$1
    Solution #2 (didn't work):
        RewriteRule /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1 [R=301]
    Solution #3 (didn't work):
        RedirectPermanent /myblog/2010/(.*) http://mydomain.com/myotherblog/2010/$1
    I've also tried the above rules with and without a fully qualified URL for the new location. The rewrite log, with the log level set to 9, did not provide any useful information. It shows that it checks the pattern specified in the rule against the URL, but what finally happens is a passthrough to http://mydomain.com/myblog/ for all URLs, or a 500 Internal Server Error. Any ideas on where I could be going wrong, or any alternative solutions?

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management (CM) tool. I thought using SVN for development at the same time was OK, since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    Proliferation of mobile devices, data explosion, and cloud enablement have caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while lowering total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today's IT organizations rely on a Service Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era where there is a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, the explosion in the number of mobile devices, unparalleled growth in voluminous data, and innovation that crosses organizational boundaries. Social, mobile, and cloud are terms that are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplifying integration, and what the next generation of SOA has to offer you.
    Agenda:
    - What's new in Oracle SOA
    - Simplifying integration
    - Application Integration and SOA
    - Cloud integration with SOA
    - Mobile Integration leveraging Oracle SOA Suite
    - Oracle Delivers on Next Generation SOA
    - Customer Examples
    - Summary and Q&A
    Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: cloud integration, mobile integration, training, webcast, middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • The Increasing Focus on Architecture

    - by Bob Rhubart
    If you follow my updates on Twitter or on the OTN ArchBeat page on Facebook, you have probably noticed that I'm a regular reader of Joe McKendrick's SOA blog on ZDNet. Usually I'm content to simply share a link on my social networks when I find one of McKendrick's posts interesting. But with a recent post, In the cloud era, let's start calling IT what it is: 'Innovation Team', McKendrick hit on a point that warrants more than a quick link: "IT is no longer just a department full of people who code, build and maintain systems. IT is the business partner that plans and strategizes what types of technology solutions the business needs to move forward." Of course, what McKendrick is describing is an increased focus on architecture. Assuming that McKendrick's assessment is correct (and I believe it is), that expanding focus, from coding, building, and maintaining systems to planning and strategizing technology solutions that serve the business, isn't limited to the organizational level. The individual roles within the IT organization will also have to shift to a more broadly architectural mindset. McKendrick's post references Dr. Irving Wladawsky-Berger's assessment of cloud computing as a critical "third model" of computing to emerge in the 50-year history of Information Technology. As computing itself evolves, the underlying roles that make computing possible must evolve accordingly. That evolution will be defined by an increased focus on architecture.

    Read the article

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the online AWS Management Console, though I whitened out most of the details. Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service, with the mouse, into the class. The IDE then automatically created all the other files, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:
        public class AmazonEC2Demo {
            public static void main(String[] args) {
                String instanceId1 = "i-something";
                RestResponse result;
                try {
                    result = AmazonEC2Service.describeInstances(instanceId1);
                    System.out.println(result.getDataAsString());
                } catch (IOException ex) {
                    Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.

    Read the article

  • Getting into the details of game engine programming

    - by Darkslash
    I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "Use an engine", "Use Unity3d", "Why waste your time writing code that already exists", etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell, I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use a tool like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance. And if possible, I would like to do this without increasing the development time much or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these specifications, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option, but, again in my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear about anything else that satisfies the above requirements. Maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C, I just wish templates were available in it.
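    For reference, a minimal sketch of the macro-metaprogramming option mentioned above (all names are invented for the example): a single macro expands to a typed, fixed-size stack, roughly what a C++ template would have generated, at the cost of the readability concerns described.
        #include <stdio.h>

        /* Expands to a struct plus a push function for the given
           element TYPE and CAPACITY, named after NAME. */
        #define DEFINE_STACK(NAME, TYPE, CAPACITY)            \
            typedef struct {                                  \
                TYPE items[CAPACITY];                         \
                int count;                                    \
            } NAME;                                           \
            static void NAME##_push(NAME *s, TYPE value)      \
            {                                                 \
                if (s->count < (CAPACITY)) {                  \
                    s->items[s->count++] = value;             \
                }                                             \
            }

        DEFINE_STACK(IntStack, int, 8)      /* one "instantiation" per type/size */
        DEFINE_STACK(FloatStack, float, 16)

        int main(void)
        {
            IntStack s = { {0}, 0 };
            IntStack_push(&s, 42);
            printf("%d\n", s.items[0]);
            return 0;
        }
    Since each expansion produces ordinary static functions, there is no runtime indirection compared with hand-written per-type code; the trade-off is exactly the readability cost noted in the question.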

    Read the article

  • Join me at OpenWorld 2012

    - by Michael Palmeter (Exalogic PM)
    For those of you who will be coming out to Oracle OpenWorld 2012 next week in San Francisco, I encourage you to take a few minutes on Monday afternoon to come to my session on Oracle Exalogic. Click here for more info: CON9416 - Oracle Exalogic 2.0: Ready-to-Deploy, Mission-Critical Private Cloud. My session is one of the first on Oracle Exalogic (one of the privileges of running Product Management for the product) and with that in mind it is going to be something of an introduction and overview. The material I will present is tailored for C-level customers that are interested in the product but haven't really been exposed to it in any detail. This is essentially the same sort of presentation I give to customers that visit Oracle HQ, and it provides context for all of the other excellent sessions that follow. During this session I will talk about:
    - The macro-trends in the industry that are driving Exalogic strategy: IT-as-a-Service and infrastructure convergence
    - The first two years of market success with Exalogic: who's bought it, why, and what their results have been
    - Exalogic key features and differentiation: why it's the best possible platform for Oracle business applications and middleware
    - How Exalogic performs, and why it is the hands-down performance champion of enterprise cloud platforms
    If you haven't signed up yet, please do. I'd love to see you there.

    Read the article

  • Should I make my project free software?

    - by SkyDan
    The story: Over the last couple of months I have been working on a pretty big project. It's enterprise-level software I designed to be used at a local gym, but I believe it can be used in other places where things like keeping track of clients, attendance, purchases and payments are required.
    The problem: Recently, I started to think about how to mature this project beyond being home-made. Not just because I want my project to grow, but also because I would like to have some gain from it.
    The solutions? Here I see 2 paths:
    1. License the software under some restrictive license and try to sell it to other businesses around. This way I can get some money for college (I am a high school junior right now).
    2. License the software under some free license, publish it on GitHub or something, and try to engage other developers to participate in the project. This way I get the experience of working in a team and a better chance that the project will keep growing. The latter would also be a good plus for my resume when I'm trying to find a job.
    So far both ways seem pretty exciting and beneficial to me. The first one offers a good college career, while the second one offers some additional experience and the project's growth.
    The questions: Can anyone point to some other pros/cons of these 2 options? What would be the better option in my situation, and why? Or are there other options?

    Read the article

  • RetinaPad Enables Retina Display for iPhone Apps on the iPad

    - by Jason Fitzpatrick
    RetinaPad is an iPad application that activates the Retina Display resolution in iPhone applications to increase their clarity on the iPad. It's a feature that should be built in but is currently only available for jailbroken iPads. The premise is simple: currently, iPads lack support for the Retina Display-level resolution that iPhone apps are capable of displaying. RetinaPad allows you to stop using the ugly and blocky simple doubling available on the iPad and start accessing the higher-resolution Retina Display mode for iPhone applications on the iPad. It's such a trivial thing that it's outright shameful Apple doesn't include it by default. You shouldn't have to jailbreak your device to unlock functionality that should be there right from the factory. Check out the demo video below to see it in action. Fire up your jailbroken iPad, launch Cydia and search for RetinaPad. RetinaPad is $2.99, iPad only.

    Read the article

  • Should I encrypt data in the database?

    - by Tio
    I have a client for whom I'm going to build a web application for patient care: managing patients, consults, history, calendars, basically everything about that. The problem is that this is sensitive data, patient history and such. The client insists on encrypting the data at the database level, but I think this is going to hurt the performance of the web app (though maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I just questioned them about this, and I'm waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database, or not? One problem that I foresee in encrypting the data is that I'm going to need a key. This could be the user's password, but we all know what user passwords are like (12345, etc.), and if I generate a key I would have to store it somewhere, which means that the programmer, the DBA, or whoever could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.

    Read the article

  • Encapsulating code in F# (Part 2)

    - by MarkPearl
    In part one of this series I showed an example of encapsulation within a local definition. This is useful to know so that you are aware of the scope of value holders etc., but what I am more interested in is encapsulation with regards to generating useful F# code libraries in .NET; this is done by using namespaces and modules. Let's have a look at some C# code first:
        using System;
        namespace EncapsulationNS
        {
            public class EncapsulationCLS
            {
                public static void TestMethod()
                {
                    Console.WriteLine("Hello");
                }
            }
        }
    Pretty simple stuff... now the F# equivalent:
        namespace EncapsulationNS

        module EncapsulationMDL =
            let TestFunction =
                System.Console.WriteLine("Hello")
                ()
    Even easier... let's look at some specifics about F# namespaces. Namespaces are open, meaning you can have multiple source files and assemblies contributing to the same namespace. So namespaces are a great way to group modules together, which raises the question: what role do modules play? For me, the F# module is in many ways similar to the VB6 days of modules. In VB6, modules were separate files that simply allowed us to group certain methods together. I find it easier to visualize F# modules this way than to compare them to C# classes. That being said, one is not restricted to one module per file; there is flexibility to have multiple modules in one code file. However, with my limited F# experience I would still recommend using the file as the standard level of separating modules, as it is then very easy to find your way around a solution. An important note about interop between F# and other .NET languages: I wrote a blog post a while back about a very basic F#-to-C# interop. If I were to reference an F# library in a C# project (for instance 'TestFunction'), C# would show this method as a static method call, meaning I would not have to instantiate an instance of the module.

    Read the article

  • Is There a Cloud Over OpenWorld?

    - by Tony Berk
    If you have been to OpenWorld in the past, you know it can be overwhelming, or at least a bit "large." If this is your first time at OpenWorld, get ready! You are in for a big (or I should say HUGE) treat. The first thing you'll notice when you get to San Francisco is that there are a lot of people, buses with "Oracle" posters, large exhibit halls filled with demos, games and tchotchkes from vendors with hot new solutions, and then there are the sessions. Yes, in fact there are over 2000 sessions. How can you possibly sort through 2000 sessions to find the best 20 or so for you? Which are the 1% for you? We will try to help with some insight over the next few weeks. I'm going to start at the highest level: up in the clouds! I know many people are looking for an update on the Oracle Cloud. We will drill down into the cloud and other topics for CRM and Customer Experience sessions in the next set of posts. Below is a list of some of the Oracle executive keynotes during OpenWorld highlighting the Oracle Cloud and applications-related topics (the full list is here). In these sessions you will get details on Oracle's strategy and how Oracle is changing the industry to help our customers be more efficient, effective and innovative.
    - Sunday, September 30, 6:00pm - 7:00pm: Larry Ellison, "Hardware and Software, Engineered to Work Together: Why it's a Different Approach"
    - Tuesday, October 2, 8:45am - 9:45am: Thomas Kurian, "The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy"
    - Tuesday, October 2, 3:30pm - 4:30pm: Larry Ellison, "The Oracle Cloud: Where Social is Built in"
    - Thursday, October 4, 9:45am - 10:45am: Mark Hurd, "See More, Act Faster: Oracle Business Analytics"
    We encourage you to also join the keynotes on the Oracle Database and Cloud Infrastructure, as well as the fascinating partner keynotes. Check the full list on the OpenWorld site. Oh, and if you haven't registered yet, what are you waiting for? OpenWorld Registration Details.

    Read the article

  • Amazon EC2 vs Dedicated server at Hetzner, what's the use for EC2?

    - by C-Blu
    After searching the web I still can't find a reason to use EC2. What's the point of scaling with EC2? If you expect a huge burst in traffic, they say. OK, but what if you already have a couple of sites with good traffic and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in the EU (Ireland), plus traffic, plus optional expenses for databases and S3 if you use them. Of course, at some point, when you are under $56.60-$66.10, you can optimize your hosting costs with Amazon EC2. But if at some point you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time, before you get massive traffic. (Am I wrong?)
    - CPU: i7-2600 quad-core (3.4-3.8 GHz)
    - RAM: 16 GB
    - HDD: 2x3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
    - Traffic: 10 TiB per month included
    This is what you get from Hetzner for $56 (excluding 19% VAT), or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take, but Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or won't a hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

    Read the article

  • Is knowledge of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though... as I try to write code (I'm pretty fluent in C++) I always sit for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well. For example, I would write
        cout << "Before memory changed" << endl;
    and run the debugger to get the assembly for this, and then try to reverse-disassemble the assembly to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, linker source code of the program, the makefile, the steps my kernel takes in processing this compilation, and the hardware involved aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling convention use, DDR3 RAM and disk drive, filesystem functioning, and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... What should I do?

    Read the article

  • How to ask developers to think about high resolution screens

    - by WhiteWind
    I just got an ASUS N56VZ laptop, and it's all good, but the screen resolution is 1920x1080px. I am not asking how to scale UI elements. If someone is interested, I have found some tricks:
    - increase the font size in system settings
    - increase the Unity dock size in MyUnity or in system settings in modern Ubuntu versions
    - tweak userChrome.js in Firefox to make buttons/panels/icons larger
    - add the DefaultZoomLevel extension to Firefox to make it zoom pages initially
    But all of it is miserable, because there are some big bugs:
    - Window decoration elements are way too small to pick with the mouse. I can't resize a window easily and I can't position my cursor quickly on the close/maximize buttons.
    - Sliders, like the one in the sound volume dialog, are hardly clickable at all.
    - The Unity top panel (status panel & tray) can hardly contain the bigger font, so it looks ugly, and the icons are still the same size.
    - Sometimes I can't read text, as it is cropped (and I can't scale some dialogs, as they have a fixed size).
    - Chromium is not usable at all (OK, it's not an Ubuntu problem, but the problem still exists).
    - Java applications are not scalable (same as above).
    - In Firefox I am able to get decent results in most cases, but the multiplication of the system font increase and the browser font increase makes system controls (comboboxes, lists, drop-down lists) extremely big, so I can't even control the zoom level on the page.
    IMHO, we should post a bug report (but what kind of bug?) and vote for it! The problem is even deeper, but at least we should ask developers to think about it. So my question is: how can I post a report (the right words and the right place), and how can we (those who already have this problem, or who want it solved before a hardware upgrade) vote for a faster solution? Any ideas?

    Read the article

  • SEO, IIS 7 and web.config in subfolder issue

    - by tesicg
    We have an ASP.NET application that has a sub-folder with .aspx pages and a separate web.config file in it. The .aspx pages in that sub-folder behave as a separate site. In the web.config file at the application level, I set a rule that removes trailing slashes:
        <rewrite>
          <rules>
            <rule name="RemoveTrailingSlashRule1" stopProcessing="true">
              <match url="(.*)/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
          </rules>
        </rewrite>
    I expect this rule to propagate downward to the sub-folder as well. To access the site in the sub-folder we should be able to type http://concert.local/elki/ and get it without the trailing slash, as http://concert.local/elki. But the trailing slash remains. The web.config file in the sub-folder looks as follows:
        <configuration>
          <system.webServer>
            <defaultDocument>
              <files>
                <add value="Sections.aspx" />
              </files>
            </defaultDocument>
          </system.webServer>
        </configuration>

    Read the article

  • What's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database (call it InternalDb), but it also queries two other databases (call them ExternalDb1 and ExternalDb2). Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is. Currently, I've just created a project for each of these external databases and then generated the .edmx and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below:
        Project.Domain
        ExternalDb1Project.Domain
        ExternalDb2Project.Domain
        Project.Web
    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in my Project.Domain on the entities in InternalDb, that's fine. But if I want to do validation for entities from either of the ExternalDbs, then I wonder where it should go. To be more specific, I retrieve Employees from ExternalDb1Project.Domain, but I want to make sure they are Active. Where should this validation go? How should a project like this be architected at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests, and I wonder where the interfaces for these various data contexts should reside.
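    A minimal sketch of the IoC idea mentioned at the end (every name here is invented for illustration, and a fake stands in for the EF-generated context): consuming code depends on a small interface rather than the generated context, so the interface can live next to the entities in the ExternalDb1 domain project and a fake can be injected in tests.
        using System.Collections.Generic;
        using System.Linq;

        // Stand-in for the EF-generated Employee entity (illustrative only).
        public class Employee
        {
            public int Id { get; set; }
            public bool IsActive { get; set; }
        }

        // Small abstraction the rest of the code depends on instead of the
        // generated context, so it can be faked in unit tests.
        public interface IExternalDb1Context
        {
            IQueryable<Employee> Employees { get; }
        }

        // Fake used in tests; a production implementation would wrap the
        // EF-generated ExternalDb1 context instead.
        public class FakeExternalDb1Context : IExternalDb1Context
        {
            private readonly List<Employee> _employees = new List<Employee>
            {
                new Employee { Id = 1, IsActive = true },
                new Employee { Id = 2, IsActive = false },
            };

            public IQueryable<Employee> Employees
            {
                get { return _employees.AsQueryable(); }
            }
        }

        // Validation/query logic sees only the interface.
        public class EmployeeService
        {
            private readonly IExternalDb1Context _db;

            public EmployeeService(IExternalDb1Context db)
            {
                _db = db;
            }

            public IQueryable<Employee> GetActiveEmployees()
            {
                return _db.Employees.Where(e => e.IsActive);
            }
        }
    In tests, EmployeeService can be constructed with FakeExternalDb1Context directly or through whatever IoC container the solution uses.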

    Read the article

  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to CodePlex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will greatly appreciate any issue report. This version is labeled "V4.0.0.0/BL0014". The "BL" string is an old habit that we used in my days at Siemens Building Technologies, called a "base level". Somehow I like this way of incrementing the "base level" independently of any other consideration (such as alpha, beta, CTP, RTM etc.) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4.
    Caveat: The code is unit tested, but as we all know this does not mean that there are no bugs. This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me!
    What's new? The following features have been implemented:
    Misc: Various "maintenance work". All WPF assemblies (that is, .NET35 and .NET4) now allow partially trusted callers. This means that you can use them in an XBAP in partial trust mode.
    Testing: Various test updates. Added Windows Phone 7 unit tests. Note: for Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft.
    ViewModelBase: The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added; see below for details.
    Messenger: Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing you to unregister a recipient for a given token.
    RelayCommand: RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation as in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details.
    Raising the PropertyChanged event: A very much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using "magic strings". Personally, I don't see strings as a major issue, thanks to two features of the MVVM Light Toolkit:
    - In the DEBUG configuration, every time the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting performance, this check is only made in the DEBUG configuration, but that should be enough to warn developers in case they miss a rename.
    - The property name is defined as a public constant in the "mvvminpc" code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only.
    However, these two safeguards didn't satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following.
    Using lambdas:
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty);
            }
        }
    This raises the property changed event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports IntelliSense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with:
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty, oldValue, value, true);
            }
        }
    Using no arguments: When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter, however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax.
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged();
            }
        }
    The old way: Of course the "old" way is still supported, without broadcast:
        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
            }
        }
    And with broadcast:
        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName, oldValue, value, true);
            }
        }
    Performance considerations: It is notorious that using reflection takes more time than using a string constant to get the property name. However, after measuring on all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results.
    Raising the CanExecuteChanged event manually: In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, an attempt was made to raise it, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and, when something occurs, it queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purpose of MVVM: since very often the value of the CanExecute delegate changes according to non-UI events (for example something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I have now changed the WPF implementation of the RaiseCanExecuteChanged method to be exactly the same in WPF as in Silverlight. For instance, if a command must be enabled when a string property is set to a value other than null or the empty string, you can do:
        public MainViewModel()
        {
            MyTestCommand = new RelayCommand(
                () => DoSomething(),
                () => !string.IsNullOrEmpty(MyProperty));
        }

        public const string MyPropertyName = "MyProperty";
        private string _myProperty = string.Empty;
        public string MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
                MyTestCommand.RaiseCanExecuteChanged();
            }
        }
    Logo update: I made a minor change to the logo. Some people found the lack of the word "light" (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness; however, I can see the point. So I added the word "light" to the logo. Things should be quite clear now.
    What's next? This is only the first of a series of releases that will bring MVVM Light to V4. In the next few weeks, I will continue to add some very requested features and correct some issues in the code. I will probably continue this fashion of releasing the changes to the public as source code through CodePlex. I would be very interested to hear what you think of that, and to get feedback about the changes. Cheers, Laurent

    Read the article

  • Java and .NET cost of use [on hold]

    - by 1110
    I have worked with the .NET technology stack for about 4 years. I am learning and enjoying working with the ASP.NET MVC framework, and I have never done anything serious in other languages. This is not a "which is better" question (I have read all the similar questions); what interests me is the cost of switching. For example: suppose you are about to start a start-up company today and you are in my situation: not too much money, a good idea that you think others will use, and knowledge of .NET. In my head I have a few questions that I can't answer, and I know that somebody with experience can:
    1) Java & .NET hosting. Suppose shared hosting is not good enough anymore; your site has grown and you need more resources. How much cheaper is Java hosting compared to .NET?
    2) I didn't follow the old hype about Oracle killing Java. Does Oracle show interest in investing in Java? I mean, is it safe to bet on Java as a technology when starting a start-up (basically, has Oracle shown any will to destroy the Java platform)?
    3) I am not sure what I am asking here. When you use Java, you can use the Java EE stack, or Java with a third-party stack (Spring, Hibernate, Maven, etc.). I have seen a lot of projects that go with the second option. If the web application is not enterprise-level but, for example, a social networking site, which stack is the best pick?
    In summary: is it safe to jump into Java, learn it, and build a product based on it? It's not too hard for me to learn. But how much can I get from it?

    Read the article

  • ECS with Go - circular imports [migrated]

    - by Andreas
    I'm exploring both Go and Entity-Component-Systems. I understand how ECS works, and I'm trying to replicate what seems to be the go-to document on ECS, namely http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ For performance, the document recommends using static arrays of every component type. That is, not arrays of component interfaces (arrays of pointers). The problem with this in Go is circular imports. I have one package, ecs, which contains the definitions for the Entity, Component and System types/interfaces as well as an EntityManager. Another package, ecs/components, contains the various components. Obviously, the ecs/components package depends on ecs. But to declare arrays of specific components in EntityManager, ecs would depend on ecs/components, therefore creating a circular import (see the sketch below). Is there any way of avoiding this? I am aware that normally a high-level system should not depend on lower systems. I also want to point out that using an array of pointers is probably fast enough for my purposes, but I'm interested in possible workarounds (for future reference). Thank you for your help!
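    To make the cycle concrete, here is a minimal sketch (the module path and the Position component are invented for illustration; this deliberately does not compile, because the two marked imports are exactly the cycle being described):
        // File ecs/ecs.go
        package ecs

        import "example.com/game/ecs/components" // ecs -> components, needed only for the typed array

        // Component is the interface every component implements.
        type Component interface {
            Update()
        }

        // EntityManager keeps a static, typed array per component type,
        // as the linked article recommends.
        type EntityManager struct {
            Positions []components.Position
        }

        // File ecs/components/position.go
        package components

        import "example.com/game/ecs" // components -> ecs: together with the import above, an import cycle

        // Position is a concrete component.
        type Position struct {
            X, Y float64
        }

        func (p Position) Update() {}

        // Compile-time check that Position satisfies ecs.Component.
        var _ ecs.Component = Position{}
    The compiler rejects this arrangement with an "import cycle not allowed" error, which is why either the interface-array approach or some restructuring (for example, keeping the typed storage outside the core ecs package) is needed.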

    Read the article
