Search Results

Search found 69140 results on 2766 pages for 'design time'.


  • Dockable panes created in CChildFrame are not visible the second time the app starts.

    - by Nijenhuis
    Hi, I have created some dockable panes in CChildFrame::OnCreate(). The first time I start the application they are shown. The second time I start the application they are created, but the splitter windows are pushed completely against the sides of the client area (bottom and right), so they are not visible. I have to use the mouse to pull the splitters back into the client area before the dockable windows become visible again. If I do File > New in my app, a new client window is created and it shows the dockable windows as they should be. I think this has something to do with saving the window layout in the registry, because if I change SetRegistryKey(_T("61sakjgsajkdg")); in the CWinApp-derived class of my app and rerun it, the panes are shown again the first time (but not the second time I restart the app). How can I save the layout of those dockable windows as well, so that they are visible when I restart my app? Or alternatively, how do I prevent my app from overwriting the window layout with the one previously saved? Does this have something to do with LoadCustomState() and SaveCustomState()? I could not find any info on how to implement those methods. Here is a link to a demo project that demonstrates what I mean: http://www.4shared.com/file/237193472/c384f0f6/GUI60.html Could someone tell me how to show those dockable windows in my CChildFrame class the second time the app starts?
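
    Below is a minimal sketch (not a verified fix) of one possible workaround under stated assumptions: persist the splitter geometry yourself with CWinApp::WriteProfileInt()/GetProfileInt() and restore it with a sanity check, so a stale saved layout can never leave a pane collapsed against the edge of the client area. The member name m_wndSplitter, the profile section/entry names, and the default widths are hypothetical; the save function would be called before the frame is destroyed, the restore function after the splitter has been created in OnCreate().

        // Sketch only: persist one splitter column width and restore it with a
        // lower bound, so the pane can never come back with zero width.
        // m_wndSplitter is a hypothetical CSplitterWnd member of CChildFrame.

        void CChildFrame::SaveSplitterLayout()
        {
            int cx = 0, cxMin = 0;
            m_wndSplitter.GetColumnInfo(0, cx, cxMin);
            AfxGetApp()->WriteProfileInt(_T("ChildFrameLayout"), _T("Col0Width"), cx);
        }

        void CChildFrame::RestoreSplitterLayout()
        {
            int cx = AfxGetApp()->GetProfileInt(_T("ChildFrameLayout"), _T("Col0Width"), 200);
            if (cx < 50)
                cx = 200;   // guard against a corrupted or zero saved value
            m_wndSplitter.SetColumnInfo(0, cx, 10);
            m_wndSplitter.RecalcLayout();
        }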

    Read the article

  • Safely convert UTC datetimes to local time (based on TZ) for calculations?

    - by James
    Following on from my last question, with which @Jon Skeet gave me a lot of help (thanks again!), I am now wondering how I can safely work with date/times stored as UTC when they are converted back to local date/time. As Jon indicated in my last question, a DateTimeOffset represents an instant in time, and there is no way to predict what the local time will be, say, a minute later. I need to be able to do calculations based on these date/times. So how can I ensure that when I pull the dates from the database, convert them to local date/time, and do specific calculations on them, the results are going to be accurate?

    Read the article

  • iTunes App Store: Does a major version upgrade = longer approval queue time?

    - by erlingormar
    I'm wondering if anyone has insight into this... when releasing an update of an iPhone application, should I expect the approval process to take longer if I submit something that's declared as a major version update (as compared to a minor version)? Last time around (about the time the big Facebook update was released) our wait time for a minor version review was 21 days (16 working days).

    Read the article

  • Why is my ruby application running faster the second time?

    - by Omega
    I'm creating a Ruby game using the Gosu framework. All good. Sometimes when I run the game it starts up slowly, and it will probably stay rather slow for the whole session. So I close it and... open it again. It is very likely that it will then start up quickly and the whole game will run smoothly and fast. Why is that? What is this phenomenon? Is it faster because some cache survives from the first run? (But why would a cache be kept? If the process dies, I would expect no references to remain, etc.) Ruby, Windows 7.

    Read the article

  • Loose Coupling and UX Patterns for Applications Integrations

    - by ultan o'broin
    I love that software architecture phrase "loose coupling". There's even a whole book about it. And if you're involved in enterprise methodology, you'll know just how important loose coupling is to the smart development of applications integrations too. Whether you are integrating offerings from the Oracle partner ecosystem with Fusion apps or working with applications coexistence scenarios, loose coupling enables the development of scalable, reliable, flexible solutions, with no second-guessing of technology. Another great book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, tells us that a key benefit of loose coupling is reducing the assumptions that integration parties (components, applications, services, programs, users) make about each other when they exchange information. Eliminating assumptions applies to UI development too: the days of assuming it's enough to hard-code a UI, with linked libraries of called code, on a desktop PC for an office worker are over. The book predates PaaS development and SaaS deployments, and was written when web services and APIs were emerging. Yet it calls out how using middleware as an assumptions-dissolving technology "glue" is central to applications integration. Realizing integration design through a set of middleware messaging patterns (messaging in the sense of asynchronously communicating data) that enable developers to meet the typical business requirements of enterprises requiring integrated functionality is very Fusion-like. User experience developers can benefit from the loose coupling approach too. User expectations and work styles change all the time, and development is now about integrating SaaS through PaaS. Cloud computing offers a virtual pivot where a single source of truth (customer or employee data, for example) can be experienced through different UIs (desktop, simplified, or mobile), each optimized for the context of the user's world of work and task completion. Smart enterprise applications developers, partners, and customers use design patterns for user experience integration benefits too. The Oracle Applications UX design patterns (and supporting guidelines) enable loose coupling of the optimized UI requirements from code. Developers can get on with the job of creating integrations through web services, APIs, and SOA without having to figure out design problems about how UIs should work. Adding the already user-proven UX design patterns (and supporting guidelines) to your toolkit means ADF and other developers can easily offer much more than just functionality and be super productive too. Great-looking application integration touchpoints can be built with our design patterns and guidelines too, for a seamless applications UX. One of Oracle's partners, Innowave Technologies, used a loose coupling architecture and our UX design patterns to create an integration for a customer that was scalable, cost-effective, and fast to develop, and that kept users productive while paving a roadmap for the customer to keep pace with the latest UX designs over time. Innowave CEO Basheer Khan, a Fusion User Experience Advocate, explains how to do it on the Usable Apps blog.

    Read the article

  • What's the best practice or design pattern for user registration?

    - by Space Cracker
    We have a big portal that requires users to register before they can use its services. It's already done in .NET and SQL Server 2005. We are now in the phase of discovering all the problems of the current registration system so that we can build a new, robust, flexible one that can be extended easily and is more usable for all services. Could anyone help me find best practices and design patterns that would help me rebuild this using good architectural practices?

    Read the article

  • What are the alternatives to fixed-price or time-and-materials contracts for software development?

    - by Fortuity
    Where can I learn more about pros/cons of various pricing models for software development? Proponents of agile methodology suggest approaches such as multi-stage contracts, target cost contracts, target schedule contracts, shared benefit contracts, variable scope contracts (http://poppendieck.com/agilecontracts.htm). I'm looking for opinions, experience, case studies or informed discussion of these approaches.

    Read the article

  • Is now the right time to move to .NET 4?

    - by bconlon
    The reason I pose this question is that I'm looking at WPF development, so using the latest version seems sensible. However, this means rolling out the .NET 4 runtime to PCs on old versions of the framework. Windows XP is still the number one OS (estimated 40%+ market share). Running .NET 4 on XP requires Service Pack 3, and although it is good practice to move to the latest service packs, large companies are often slow to keep up due to the extensive testing involved. In fact, .NET 4 is not yet installed as standard with any Windows OS - Windows 7 and Windows Server 2008 R2 ship with 3.5. This is not quite as big an issue as it was for .NET 3.5, because .NET 4 is significantly smaller, since it doesn't include the older runtimes - .NET 3.5 SP1 included .NET 3 and .NET 2 and was 250MB, although this was reduced by doing a web install. The size is also reduced a bit if you target the .NET 4 Client Profile, which should be fine for many WPF applications, and I think this may be rolled out as part of Windows service packs soon. But still, if your application is only 4-5 MB and you need 40-50 MB of framework, it is worth considering before jumping in and using the shiny new features.

    Read the article

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments and something like 2h, 3h,... would be preferable. However, I cannot seem to change the time interval. Here is an example of what my Customize Performance Chart dialog shows: I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above -- I've already looked but did not find much) would be greatly appreciated.

    Read the article

  • Can you be a programmer and a business manager at the same time?

    - by the_knight5000
    Hello all, I think I'm stuck in an awkward situation! We are a new start-up with 5 employees (2 programmers). I'm the technical manager, and that was working fine. Now the fingers are pointing at me to take control of everything and play the role of CEO or general manager, as I have the big-picture vision of what our organization does. I want to, but I have no idea whether such a decision would be risky for our organization. How would managerial interruptions affect my technical productivity? Any tips or previous experience with such a situation would help :) Thanks in advance!

    Read the article

  • Get the expiration time of a memcache item in PHP?

    - by Jonatan Littke
    Hey. I'm caching tweets on my site (with a 30-minute expiration time). When the cache is empty, the first user to find out repopulates it. However, at that moment the Twitter API may not return a 200, in which case I'd like to keep serving the previous data for another 30 minutes. But by then the previous data has already been lost. So instead I'd like to look into repopulating the cache, say, 5 minutes before the expiration time so that I don't lose any data. So how do I know the expiration time of an item when using PHP's Memcache::get()? Also, is there a better way of doing this?

    Read the article

  • Is it better to cut and store all sprites needed from a spritesheet in memory, or cut them out just-in-time?

    - by xLite
    I'm not sure what's best practice here, as I have little experience with this. Essentially what I am asking is whether it's better to: take your single PNG with all your different sprites on it, cut out every sprite on startup and store them in memory, then quickly access the already-cut-out sprites from memory; or keep only the single PNG with all the different sprites in memory, and when you need, for example, a tree, cut the tree out of the PNG and then use it as normal. I imagine the former is more CPU-friendly but less memory-friendly, and vice versa for the latter. I want to know what the norm is in game dev. This is a pixel-based game using 2D art. Each PNG is actually an avatar's sprite sheet, with each body part separated and later joined to form the avatar's full body.
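
    For what it's worth, a common middle ground is to cut just-in-time but cache each cut-out so the cost is only paid once. The sketch below is illustrative only; Image, Rect, and SpriteSheet are hypothetical stand-ins for whatever the engine (Gosu, SDL, SFML, ...) actually provides.

        #include <map>
        #include <tuple>

        // Hypothetical stand-ins for engine types.
        struct Image { /* pixel data for one cut-out sprite */ };

        struct Rect {
            int x, y, w, h;
            bool operator<(const Rect& o) const {
                return std::tie(x, y, w, h) < std::tie(o.x, o.y, o.w, o.h);
            }
        };

        class SpriteSheet {
        public:
            // Returns the sprite for rectangle r, cutting it from the sheet only
            // the first time it is requested and reusing the cached copy afterwards.
            const Image& sprite(const Rect& r) {
                auto it = cache_.find(r);
                if (it == cache_.end())
                    it = cache_.emplace(r, cutFromSheet(r)).first;
                return it->second;
            }

        private:
            // Placeholder: in a real engine this would copy pixels out of the PNG.
            Image cutFromSheet(const Rect& /*r*/) { return Image{}; }

            std::map<Rect, Image> cache_;  // grows only with sprites actually used
        };

    This keeps startup cheap, never cuts the same sprite twice, and only spends memory on sprites that are actually used - a reasonable default when an avatar sheet contains many body parts but a given avatar only ever uses some of them.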

    Read the article

  • Initialize array in amortized constant time -- what is this trick called?

    - by user946850
    There is a data structure that trades the performance of array access against the need to iterate over the array when clearing it. You keep a generation counter with each entry, and also a global generation counter. The "clear" operation increments the global generation counter. On each access, you compare the entry's local counter against the global one; if they differ, the entry is treated as cleared. This came up in an answer on Stack Overflow recently, but I don't remember whether this trick has an official name. Does it? One use case is Dijkstra's algorithm when only a tiny subset of the nodes has to be relaxed, and when this has to be done repeatedly.
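
    The technique is often described as a versioned, timestamped, or generation-counted array. Below is a minimal sketch of the idea as described above; the names are illustrative and not taken from the original answer.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Array whose clear() is O(1): instead of touching every slot, bump a
        // global generation counter; a slot whose stored generation differs from
        // the current one is treated as if it had been cleared.
        template <typename T>
        class ClearableArray {
        public:
            explicit ClearableArray(std::size_t n, T def = T())
                : values_(n, def), gens_(n, 0), current_(1), default_(def) {}

            void set(std::size_t i, const T& v) {
                values_[i] = v;
                gens_[i] = current_;
            }

            // Slots not written since the last clear() read as the default value.
            T get(std::size_t i) const {
                return gens_[i] == current_ ? values_[i] : default_;
            }

            // O(1): invalidate every slot at once (64-bit counter, so wrap-around
            // is not a practical concern).
            void clear() { ++current_; }

        private:
            std::vector<T> values_;
            std::vector<std::uint64_t> gens_;
            std::uint64_t current_;
            T default_;
        };

    In the Dijkstra use case mentioned above, the tentative-distance array can be reset between repeated runs in constant time, while each individual access only costs one extra comparison.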

    Read the article

  • How can I most accurately calculate the execution time of an ASP.NET page while also displaying it on the page?

    - by henningst
    I want to calculate the execution time of my ASP.NET pages and display it on the page. Currently I'm calculating the execution time using a System.Diagnostics.Stopwatch and then storing the value in a log database. The stopwatch is started in OnInit and stopped in OnPreRenderComplete. This seems to be working quite well, and it gives a similar execution time to the one shown in the page trace. The problem is that I'm not able to display the execution time on the page because the stopwatch is stopped too late in the life cycle. What is the best way to do this?

    Read the article

  • Do you know of a C macro to compute Unix time and date?

    - by Alexis Wilke
    I'm wondering if someone knows of (or has) a C macro to compute a static Unix time from a hard-coded date and time, as in:

        time_t t = UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13);

    I'm looking into this because I want a numeric static timestamp. This will be done hundreds of times throughout the software, each time with a different date, and I want to make sure it is fast because it will run hundreds of times every second. Converting dates that many times would definitely slow things down (i.e. calling mktime() is slower than having a static number compiled in place, right?) [made an update to try to render this paragraph clearer, Nov 23, 2012]

    Update: I want to clarify the question with more information about the process being used. As my server receives requests, it starts a new process for each request. That process is constantly updated with new plugins, and quite often such updates require a database update. Those must be run only once. To know whether an update is necessary, I want to use a Unix date (which is better than using a counter, because a counter is much more likely to break once in a while). The plugins will thus receive an update signal and have their on_update() function called. There I want to do something like this:

        void some_plugin::on_update(time_t last_update)
        {
            if(last_update < UNIX_TIMESTAMP(2010, 3, 22, 20, 9, 26))
            {
                ...run update...
            }
            if(last_update < UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13))
            {
                ...run update...
            }
            // as many tests as required...
        }

    As you can see, if I have to compute the Unix timestamp each time, this could represent thousands of calls per process, and if you receive 100 hits a second x 1000 calls, you have wasted 100,000 calls when the compiler could have computed those numbers once at compile time. Putting the value in a static variable is of no interest because this code runs once per process. Note that the last_update variable changes depending on the website being hit (it comes from the database).

    Code: Okay, I got the code now:

        // helper (Days in February)
        #define _SNAP_UNIX_TIMESTAMP_FDAY(year) \
            (((year) % 400) == 0 ? 29LL : \
                (((year) % 100) == 0 ? 28LL : \
                    (((year) % 4) == 0 ? 29LL : \
                        28LL)))

        // helper (Days in the year)
        #define _SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) \
            ( \
                /* January */   static_cast<qint64>(day) \
                /* February */  + ((month) >=  2 ? 31LL : 0LL) \
                /* March */     + ((month) >=  3 ? _SNAP_UNIX_TIMESTAMP_FDAY(year) : 0LL) \
                /* April */     + ((month) >=  4 ? 31LL : 0LL) \
                /* May */       + ((month) >=  5 ? 30LL : 0LL) \
                /* June */      + ((month) >=  6 ? 31LL : 0LL) \
                /* July */      + ((month) >=  7 ? 30LL : 0LL) \
                /* August */    + ((month) >=  8 ? 31LL : 0LL) \
                /* September */ + ((month) >=  9 ? 31LL : 0LL) \
                /* October */   + ((month) >= 10 ? 30LL : 0LL) \
                /* November */  + ((month) >= 11 ? 31LL : 0LL) \
                /* December */  + ((month) >= 12 ? 30LL : 0LL) \
            )

        #define SNAP_UNIX_TIMESTAMP(year, month, day, hour, minute, second) \
            ( /* time */ static_cast<qint64>(second) \
                + static_cast<qint64>(minute) * 60LL \
                + static_cast<qint64>(hour) * 3600LL \
                /* year day (month + day) */ \
                + (_SNAP_UNIX_TIMESTAMP_YDAY(year, month, day) - 1) * 86400LL \
                /* year */ \
                + (static_cast<qint64>(year) - 1970LL) * 31536000LL \
                + ((static_cast<qint64>(year) - 1969LL) / 4LL) * 86400LL \
                - ((static_cast<qint64>(year) - 1901LL) / 100LL) * 86400LL \
                + ((static_cast<qint64>(year) - 1601LL) / 400LL) * 86400LL )

    WARNING: Do not use these macros to dynamically compute a date. That is SLOWER than mktime(). This being said, if you have a hard-coded date, the compiler will compute the time_t value at compile time. Slower to compile, but faster to execute over and over again.
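
    Since everything in SNAP_UNIX_TIMESTAMP is an integer constant expression, the compile-time evaluation can be both documented and verified with a static_assert. The snippet below is an illustrative sketch only, not part of the original code; it assumes qint64 comes from <QtGlobal> (as the macros above imply a Qt build) and uses the known value 1336641973 for 2012-05-10 09:26:13 UTC.

        // Illustrative only: verifies the macro at compile time (C++11 static_assert).
        #include <QtGlobal>   // qint64, as assumed by the macros above

        static_assert(SNAP_UNIX_TIMESTAMP(2012, 5, 10, 9, 26, 13) == 1336641973LL,
                      "SNAP_UNIX_TIMESTAMP should evaluate to the expected Unix time");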

    Read the article

  • In a web farm environment, should we base the system date/time on the web servers or the database server?

    - by leypascua
    Assuming there are a number of load-balanced web servers in a web farm, is it safe to use DateTime.Now in the application code for getting the system date/time, or should we leave this responsibility to the database server? Is there a chance that the machine date/time settings on the servers in the web farm could be out of sync? If date/time is the responsibility of the DBMS, how will this strategy work if we have load-balanced, replicated databases?

    Read the article

  • Why do I have to add a PPA twice (once to add it to the list of repositories, and a second time to fix a BAD GPG error)?

    - by Luis Alvarado
    I notice the following: I add a PPA using add-apt-repository (for example the Wine PPA, Mozilla security, NVIDIA drivers, etc.). When I go to the Update Manager and tell it to check for updates, it throws a PPA error. To solve the error I add the same PPA again. Why do I have to add the PPA again? (This can also be fixed by adding the received key alone with apt-key.) But why does this problem happen in the first place?

    Read the article

  • Do hiring managers have a hard time accepting developers who have a "business-lookalike" personal app but are NOT entrepreneurs?

    - by shadesco
    Directly after graduating from university, I decided to build my own web app (Ease My Day) while waiting to get a job as a software engineer. The reasons for building this app were to gain solid hands-on software experience before hitting the job scene, to provide a solution to a common problem, and to avoid sitting around doing nothing while searching for jobs. The app is not an entrepreneurial tryout, nor a business to be sold. Still, throughout interviews I noticed that in about 4 of every 5 interviews the app is confused with a business and I am asked the same questions: Why did you build the business? Why do you want to stop working on the app? Do you want to sell the app? This despite the fact that I didn't build a business and make no income from this application. Do candidates who take initiative and like to craft their own apps on the side raise a red flag on the hiring manager's radar?

    Read the article

  • What's the current wait time for reconsideration requests for Google's webmaster tools?

    - by chrism2671
    We recently received an unnatural-links penalty on our site; a rogue SEO firm did us some serious damage, and we lost 40% of our traffic (hundreds of thousands of users) overnight. The effect on our business has been severe and we're really hoping we can make things right. We submitted a reconsideration request, but I'm wondering how long I should forecast for an outcome, as it will have a knock-on effect on our business.

    Read the article

  • UI design for a screen which has a number of data-entry text fields.

    - by shilpa
    Hi all, my app is an e-commerce application. During the checkout process, I have "recipient" and "credit card" screens. On the recipient screen, there are a number of fields for address entry, which makes it very clumsy. The same goes for the credit card screen, where the user is asked to enter their credit card info and billing address. Can anyone suggest how to design these two pages?

    Read the article

  • How to verify code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question ("What is the best approach for coding in a slow compilation environment"), to recap: I am stuck with a large software system with which the TDD ideology of "test often" does not work. To make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and start "coding in large chunks", by which I mean coding for two to three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source code verification / error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler and unit testing tools that I have had until recently, so there should be a way to get back to the basics.

    Read the article
