Search Results

Search found 529 results on 22 pages for 'quantum jumping'.

Page 13/22 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • What is the most effective approach to learn an unfamiliar complex program? [closed]

    - by bdroc
    Possible Duplicate: How do you dive into large code bases? I have quite a bit of experience with different programming languages and writing small and functional programs for a variety of purposes. My coding skills aren't what I have a problem with. In fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for the respective languages (Python and Java are my two primary languages)? Should I be starting with just looking at function names or documentation? How do you veterans approach this problem? I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill in an open source project, so any suggestions for starting points (maybe mildly complex) would be great too!

    Read the article

  • Game logic dynamically extendable architecture implementation patterns

    - by Vlad
    When coding games there are a lot of cases where you need to inject your logic into an existing class dynamically and without making unnecessary dependencies. For example, I have a Rabbit which can be affected by a freeze ability so it can't jump. It could be implemented like this: class Rabbit { public bool CanJump { get; set; } void Jump() { if (!CanJump) return; ... } } But what if I have more than one ability that can prevent it from jumping? I can't just set one property because several circumstances can be active simultaneously. Another solution? class Rabbit { public bool Frozen { get; set; } public bool InWater { get; set; } bool CanJump { get { return !Frozen && !InWater; } } } Bad. The Rabbit class can't know all the circumstances it can run into. Who knows what else the game designer will want to add: maybe an ability that changes gravity in an area? Maybe make a stack of bool values for the CanJump property? No, because abilities can be deactivated in a different order than the one in which they were activated. I need a way to separate the ability logic that prevents the Rabbit from jumping from the Rabbit itself. One possible solution for this is making a special checking event: class Rabbit { class CheckJumpEventArgs : EventArgs { public bool Veto { get; set; } } public event EventHandler<CheckJumpEventArgs> OnCheckJump; void Jump() { var args = new CheckJumpEventArgs(); if (OnCheckJump != null) OnCheckJump(this, args); if (args.Veto) return; ... } } But it's a lot of code! A real Rabbit class would have a lot of properties like this (health and speed attributes, etc.). I'm thinking of borrowing something from the MVVM pattern, where all the properties and methods of an object are implemented in a way that lets them be easily extended from outside. Then I want to use it like this: class FreezeAbility { void ActivateAbility() { _rabbit.CanJump.Push(ReturnFalse); } void DeactivateAbility() { _rabbit.CanJump.Remove(ReturnFalse); } // should be implemented as instance member // so it can be "unsubscribed" bool ReturnFalse(bool previousValue) { return false; } } Is this approach good? What else should I consider? What are other suitable options and patterns? Any ready-to-use solutions? UPDATE The question is not about how to add different behaviors to an object dynamically but how its implementation (or its behavior) can be extended with external logic. I don't need to add a different behavior; I need a way to modify an existing one, and I also need the possibility to undo the changes.
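
    (An aside, purely as an illustration of the "push/remove a modifier" idea sketched above: a minimal, language-neutral version in Python. The names ModifiableValue, Rabbit and FreezeAbility are made up for this sketch and are not an existing API.)

      # Illustrative sketch only: ModifiableValue / FreezeAbility are made-up names, not an existing API.
      class ModifiableValue:
          """A base value plus a removable stack of modifier callables."""
          def __init__(self, base):
              self._base = base
              self._modifiers = []                  # removal order does not need to match activation order

          def push(self, modifier):
              self._modifiers.append(modifier)

          def remove(self, modifier):
              self._modifiers.remove(modifier)      # undo one specific modifier

          @property
          def value(self):
              result = self._base
              for modify in self._modifiers:
                  result = modify(result)           # each modifier sees the previous value
              return result

      class Rabbit:
          def __init__(self):
              self.can_jump = ModifiableValue(True)

          def jump(self):
              if not self.can_jump.value:
                  return
              print("jump!")

      class FreezeAbility:
          def __init__(self, rabbit):
              self._rabbit = rabbit
              self._return_false = lambda previous: False   # kept as an instance member so it can be removed

          def activate(self):
              self._rabbit.can_jump.push(self._return_false)

          def deactivate(self):
              self._rabbit.can_jump.remove(self._return_false)

      rabbit = Rabbit()
      freeze = FreezeAbility(rabbit)
      freeze.activate()
      rabbit.jump()        # does nothing while frozen
      freeze.deactivate()
      rabbit.jump()        # prints "jump!"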

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
    This is a very interesting wait type and quite often seen as one of the top wait types. Let us discuss this today. From Book On-Line: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed. SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working methodology for SQL Server is that it does not let any “runnable” thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers. There are always new threads coming up which are ready to run (in other words, runnable). Thread management is decided by SQL Server and not by the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are co-operative and let other threads run from time to time by yielding. When any thread yields to another thread, it creates this wait. If many threads are waiting to run, it clearly indicates that the CPU is under pressure. You can run the following DMV to see how many runnable tasks there are in your system. SELECT scheduler_id, current_tasks_count, runnable_tasks_count, work_queue_count, pending_disk_io_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255 GO If you notice a two-digit number in runnable_tasks_count continuously for a long time (not once in a while), you will know that there is CPU pressure. A two-digit number is usually considered a bad thing; you can read the description of the above DMV over here. Additionally, there are several other counters (% Processor Time and other processor-related counters) that you can refer to in order to validate CPU pressure along with the method explained above. Reducing SOS_SCHEDULER_YIELD wait: This is the trickiest part of this procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is the solution in simple terms; however, it is not easy to implement this solution. There are other things that you can consider when this wait type is very high. Here is a query you can use to find the most CPU-expensive queries in the cache. Note: A query that used lots of resources but is not cached will not be caught here. SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_worker_time DESC -- CPU time You can find the most expensive queries that are utilizing lots of CPU (from the cache) and tune them accordingly. Moreover, you can find the longest running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time, because if that is also consistently high, then the CPU is under too much pressure. You can also check the perfmon counters for compilations, as they tend to use a good amount of CPU. 
    An index rebuild is also a CPU-intensive process, but consider whether that is really the main cause here, because rebuilds are indeed needed on high-transaction OLTP systems to reduce fragmentation. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • The Art of Productivity

    - by dwahlin
    Getting things done has always been a challenge regardless of gender, age, race, skill, or job position. No matter how hard some people try, they end up procrastinating tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left your mental energy and focus seem to kick into high gear, especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task. Some people seem to specialize in procrastinating just about everything they do while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow - whether knowingly or unknowingly. While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because I was on a tractor with him the first time I heard it while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging work and somewhat tedious but my dad finished the task and taught me a few important lessons along the way including persistence, the importance of having a skill, and getting the job done right without skimping along the way. In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. I blogged about the general concept back in 2009 but thought I’d share some updated information and lessons learned since then. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips: Start Each Day with a List. Start Each Day with a List What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our head you might be surprised at how effective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done. Start every morning by making a list of specific tasks that you want to accomplish throughout the day. I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day. 
I’m not a big fan of using paper since I type a lot faster than I write (plus I write like a 3rd grader according to my wife), so I use the Sticky Notes feature available in Windows. Here’s an example of yesterday’s sticky note: What do you add to your list? That’s the subject of the next tip. Focus on Small Tasks It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning only add tasks that you can accomplish within a given time period. For example, if I only have 30 minutes blocked out to work on an article I don’t list “Write Article”. If I do that I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add, “Write first section of article” or something that’s small and manageable – something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks it’s easy to continue completing others since you want to keep the momentum going. In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed which hurts my productivity. It’s easy to add additional tasks as you complete others and you get the added benefit of that confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others. Getting Started is the Hardest (Yet Easiest) Part I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like getting a large rock pushed to the bottom of a hill. It’s difficult to get the rock rolling at first, but once you manage to get it rocking some it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway). They introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article. On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text.  Talk about a productivity killer! Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump into the task and stop worrying about doing everything perfectly that things will flow more smoothly. For my introductory paragraph problem I give myself 5 minutes to write out some general concepts about what I know the article will cover and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done. To sum this tip up: Jump into a task without thinking too hard about it. 
    It’s better to get the rock at the top of the hill rocking some than doing nothing at all. You can always go back and refine your work.   Learn a Productivity Technique and Stick to It There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars because I think that being productive isn’t that hard if you create a re-usable set of steps and processes to follow. That’s not to say that some of these programs/seminars aren’t worth the money of course because I know they’ve definitely benefited some people that have a hard time getting things done and staying focused. One of the best productivity techniques I’ve ever learned is called the “Pomodoro Technique” and it’s completely free. This technique is an extremely simple way to manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell here’s how the technique works: Pick a task to work on Set the timer to 25 minutes and work on the task Once the timer rings record your time Take a 5 minute break Repeat the process Here’s why the technique works well for me: It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored. 25 minutes is a small enough chunk of time for me to stay focused. Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point. When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper….frantically working to get it done) which helps me stay focused and turns into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way and really helps me focus on the task at hand. I automatically know how much time I’m spending on a given task (more on this later) by using this technique. I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like including visiting a website, stepping away from the computer, etc. which also helps me stay focused when the 25 minute timer is counting down. I use this technique so much that I decided to build a program for Windows 8 called Pomodoro Focus (I plan to blog about how it was built in a later post). It’s a Windows Store application that allows people to track tasks, productive time spent on tasks, interruption time experienced while working on a given task, and the number of pomodoros completed. If a time estimate is given when the task is initially created, Pomodoro Focus will also show the task completion percentage. I like it because it allows me to track my tasks, time spent on tasks (very useful in the consulting world), and even how much time I wasted on tasks (pressing the pause button while working on a task starts the interruption timer). I recently added a new feature that charts productive and interruption time for tasks since I wanted to see how productive I was from week to week and month to month. A few screenshots from the Pomodoro Focus app are shown next. I had a lot of fun building it and use it myself as I work on tasks.   
There are certainly many other productivity techniques and processes out there (and a slew of books describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting things done.   Persistence is Key Getting things done is great but one of the biggest lessons I’ve learned in life is that persistence is key especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished, however, it’s not all roses along the way as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all. I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating, “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, follow a productivity process, etc. then the odds of consistently staying productive aren’t good.   Track Your Time How much time do you actually spend working on various tasks? If you don’t currently track time spent answering emails, on phone calls, and working on various tasks then you might be surprised to find out that a task that you thought was going to take you 30 minutes ultimately ended up taking 2 hours. If you don’t track the time you spend working on tasks how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique – it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.   Eliminate Distractions I blogged about this final tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or on Twitter, Facebook, or other time sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish then bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on Facebook? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus. I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But, I’ve made a conscious decision to cut back on my visits and updates to Facebook, Twitter, Google+ and other sites. Sure, my Klout score has suffered as a result lately, but does anyone actually care about those types of scores aside from your online “friends” (few of whom you’ve actually met in person)? :-) Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time wasting news site, game site, social site, picture site, or others, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track. 
    You can download my uber-sophisticated homepage here if interested. Summary Is there a single set of steps that if followed can ultimately lead to productivity? I don’t think so since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. While I certainly don’t consider myself to be an expert on the subject of productivity, I do think that if you learn what steps work best for you and gradually refine them over time, you can come up with a personal productivity process that can serve you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. You’ve seen some of the steps that I personally like to follow and I hope you find some of them useful in boosting your productivity. If you have others you use please leave a comment. I’m always looking for ways to improve.

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4 GB USB flash drive and Peter also has a 4 GB USB flash drive. They meet once and save onto both drives two files named alice_to_peter.key (2 GB) and peter_to_alice.key (2 GB), which contain randomly generated bits. They never meet again and communicate only electronically. Alice also maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do: encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and for maximum security the used part of the key can be erased). Peter receives encrypted_message, reads the alice_pointer stored at the beginning of the message, and does this: message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] And for maximum security, after reading the message he also erases the used part of the key. - EDIT: In fact this step, with this simple algorithm (without integrity check and authentication), decreases security; see Paulo Ebermann's post below. When Peter needs to send a message to Alice they do the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, every day for the next 50 years, 2 GB / (50 * 365) = approximately 115 kB of encrypted data in each direction. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2 TB hard discs (1 TB of keys each way) it is possible to exchange about 60 MB/day for the next 50 years! (That is practically a lot of data; with compression it is more than an hour of high-quality voice communication per day.) It seems to me there is no way for an attacker to read an encrypted message without the keys, even with an infinitely fast computer: even brute force only yields every possible message that fits the length of the ciphertext, which is an astronomical number of messages, and the attacker doesn't know which of them is the actual one. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application that uses large memories at both communicating sides for keys? I humbly expect that this application was invented by someone before me :-) ) Note: If it is absolutely secure then it is amazing, because with today's low-cost large memories it is in practice a much cheaper way of secure communication than expensive quantum cryptography, with equivalent security! EDIT: I think this will become more and more practical in the future with ever lower memory costs. It could solve secure communication forever. Today you have no certainty that someone won't successfully attack an existing cipher a year later and render its often expensive implementations insecure. In many cases, before communication starts there is a step where the communicating sides meet in person; that is the time to generate the large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed while the military command keeps a hard drive for each submarine it has. It could also be practical in everyday life, for example for controlling your bank account, because when you create your account you meet with the bank, etc.
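
    (For illustration only, a minimal sketch in Python of the scheme described above, with no authentication or integrity protection, which, as the edit notes, a real design would need. The file name and the 8-byte pointer framing are assumptions made up for this sketch.)

      # Sketch of the scheme above; the file name and 8-byte pointer framing are made-up assumptions.
      import os, struct

      def encrypt(message, key_path, pointer):
          with open(key_path, "rb") as f:
              f.seek(pointer)
              key = f.read(len(message))
          if len(key) < len(message):
              raise RuntimeError("key material exhausted")
          cipher = bytes(m ^ k for m, k in zip(message, key))
          # prepend the pointer so the receiver knows where to read its own key copy
          return struct.pack(">Q", pointer) + cipher, pointer + len(message)

      def decrypt(packet, key_path):
          pointer = struct.unpack(">Q", packet[:8])[0]
          cipher = packet[8:]
          with open(key_path, "rb") as f:
              f.seek(pointer)
              key = f.read(len(cipher))
          return bytes(c ^ k for c, k in zip(cipher, key))

      # demo with a small throwaway key file standing in for the 2 GB one
      with open("alice_to_peter.key", "wb") as f:
          f.write(os.urandom(1024))
      packet, alice_pointer = encrypt(b"hello Peter", "alice_to_peter.key", 0)
      print(decrypt(packet, "alice_to_peter.key"))   # b'hello Peter'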

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment. As a simple example, the poll() system call takes a timeout argument in units of msec: System Calls poll(2) NAME poll - input/output multiplexing SYNOPSIS int poll(struct pollfd fds[], nfds_t nfds, int timeout); In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec. In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mention CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list: nanosleep pthread_mutex_timedlock pthread_mutex_reltimedlock_np pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np mq_timedreceive mq_reltimedreceive_np mq_timedsend mq_reltimedsend_np sem_timedwait sem_reltimedwait_np poll select pselect _lwp_cond_timedwait _lwp_cond_reltimedwait semtimedop sigtimedwait aiowait aio_waitn aio_suspend port_get port_getn cond_timedwait cond_reltimedwait setitimer (ITIMER_REAL) misc rpc calls, misc ldap calls This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread. Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules. A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec. 
They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick. Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.
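
    (As an aside, not from the original post: a rough way to observe the behavior described above from user land, assuming a POSIX system with Python 3.3+. It compares the advertised CLOCK_REALTIME resolution with what a requested 1 msec poll() timeout actually costs.)

      # Assumes a POSIX system with Python 3.3+; just a way to observe the resolution change described above.
      import select, time

      print("CLOCK_REALTIME resolution:", time.clock_getres(time.CLOCK_REALTIME))

      poller = select.poll()
      start = time.monotonic()
      for _ in range(100):
          poller.poll(1)            # ask for a 1 msec timeout, like poll(NULL, 0, 1)
      elapsed = time.monotonic() - start
      print("average poll(..., 1 msec) wait: %.2f msec" % (elapsed * 10))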

    Read the article

  • Designer serialization persistence problem in .NET, Windows Forms

    - by Jules
    ETA: I have a similar, smaller, problem here which, I suspect, is related to this problem. I have a class which has a readonly property that holds a collection of components (* not quite, see below). At design time, it's possible to select from the components on the design surface to add to the collection. (Think imagelist, but instead of selecting one, you can select as many as you want.) As a test, I inherit from button and attach my class to it as a property. The persistence problem occurs when I add a component,to the collection, from the design surface after I have added my button to the form. The best way to demonstrate this is to show you the designer generated code: Private Sub InitializeComponent() Dim Provider1 As WindowsApplication1.Provider = New WindowsApplication1.Provider Me.MyComponent2 = New WindowsApplication1.MyComponent Me.MyComponent1 = New WindowsApplication1.MyComponent Me.MyButton1 = New WindowsApplication1.MyButton Me.MyComponent3 = New WindowsApplication1.MyComponent Me.SuspendLayout() ' 'MyButton1 ' Me.MyButton1.ProviderCollection.Add(Me.MyButton1.InternalProvider) Me.MyButton1.ProviderCollection.Add(Me.MyComponent1.Provider) Me.MyButton1.ProviderCollection.Add(Me.MyComponent2.Provider) Me.MyButton1.ProviderCollection.Add(Provider1) //Wrong should be Me.MyComponent3.Provider ' 'Form1 ' Me.Controls.Add(Me.MyButton1) End Sub Friend WithEvents MyComponent1 As WindowsApplication1.MyComponent Friend WithEvents MyComponent2 As WindowsApplication1.MyComponent Friend WithEvents MyButton1 As WindowsApplication1.MyButton Friend WithEvents MyComponent3 As WindowsApplication1.MyComponent End Class As you can see from the code, the collection is not actually a collection of the components, but a collection of a property, 'Provider', from the components. It looks like the problem is occurring because MyComponent3 is created after MyButton. However, in my opinion, this should not make any difference - by the time the serializer comes to add the provider property of MyComponent3, it's already created. Note: You may wonder, why I'm not using AddRange to persist the collection. The reason for this is that if I do, the behaviour changes and none of the items will persist correctly. The designer will create local fields - like Provider1 - for each item in the collection. However if I add another collection to the class which holds the actual MyComponents and persist this, then, somehow, the AddRange method persists correctly in ProviderCollection! There seems to be some kind of quantum double slit experiment going down in code dom. How can I solve this problem?

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read before of several ways to get this functionality on Linux (apparently this is built in on Windows), but none of them seem suitable for my app. The methods I know of are: Have one thread wait on each of the condition variables you want to wait on, which when woken will signal a single condition variable which you wait on instead. Cycling through multiple condition variables with a timed wait. Writing dummy bytes to files or pipes instead, and polling on those. #1 & #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), then the writing thread will be stuck because the pipe's buffer will be full, as will any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran. Is there a better way to do this? Curious for answers appropriate for Solaris as well.
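
    (Purely as an illustration, in Python for brevity, of the usual workaround when a true "wait on many condition variables" primitive isn't available: share one condition variable among all the events of interest and re-check every predicate after each wakeup. The names here are made up, and this sketch does not address the cross-process/shared-memory requirement above.)

      # Illustration only: one shared Condition plus per-event flags; names are made up.
      import threading

      cond = threading.Condition()
      events = {"data_ready": False, "shutdown": False}

      def signal(name):
          with cond:
              events[name] = True
              cond.notify_all()        # wake every waiter; each one re-checks its own predicates

      def wait_for_any(names):
          with cond:
              cond.wait_for(lambda: any(events[n] for n in names))
              return [n for n in names if events[n]]

      # usage: one thread calls wait_for_any(["data_ready", "shutdown"]),
      # another calls signal("data_ready") when work becomes available.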

    Read the article

  • Entangled text boxes

    - by user38329
    Hi StackOverflow, A mere Windows textbox greatly surprised me today. I have two unrelated text boxes inside an application. I can type in either text box and switch the focus by clicking on them. Then some event X happens, which I can't describe here for reasons given below. After this event happens, the two text boxes become "entangled" in an almost quantum way. Say, text box A was focused before X happened. When I click text box B to type in some text, the new text appears in text box A, whereas the blinking cursor happily moves along in text box B through the void, as if the text were there. No amount of clicking on either text box can resolve this. The cursor will always remain in B, whereas the text will always go to A. Message spying reveals that after the event X, the text boxes lose the ability to lose or gain focus. When I click on B, WM_KILLFOCUS does not come to A, and WM_SETFOCUS does not come to B. (The rectangles and visibility of the boxes are OK.) The same thing happens in Windows XP and Windows 7. Now, event X: it's a big event in a third-party UI library which I cannot reverse-engineer in a timely manner. (Namely, docking a pane in wxAUI.) I am sure that this behavior is the result of incorrect WinAPI calls to the text boxes (garbage in - garbage out). I would like to know what could possibly cause such a "textbox trip", so I know where to start looking for the bug. Thanks!

    Read the article

  • Classical task-scheduling assignment

    - by Bruno
    I am working on a flight scheduling app (disclaimer: it's for a college project, so no code answers, please). Please read this question w/ a quantum of attention before answering as it has a lot of peculiarities :( First, some terminology issues: You have planes and flights, and you have to pair them up. For simplicity's sake, we'll assume that a plane is free as soon as the flight that was previously using it lands. Flights are seen as tasks: They have a duration They have dependencies They have an expected date/time for beginning Planes can be seen as resources to be used by tasks (or flights, in our terminology). Flights need a specific type of plane, e.g. flight 200 needs a plane of type B. Planes obviously are of one and only one specific type, e.g., Plane Airforce One is of type C. A "project" is the set of all the flights by an airline in a given time period. The functionality required is: Finding the shortest possible duration for a given project The earliest and latest possible start for a task (flight) The critical tasks, based on the provided data, complete with identifiers of preceding tasks. Automatically pair up flights and planes, so as to get all flights paired up with a plane. (Note: the duration of flights is fixed) Get a Gantt diagram with the project's scheduling, in which all flights begin as early as possible, showing all previously referred data graphically (dependencies, time info, etc.) So the question is: How in the world do I achieve this? Particularly: We are required to use a graph. What do the graph's edges and nodes respectively symbolise? Are we required to discard tasks to obtain the critical task set? If you could also recommend some algorithms for us to look up, that'd be great.
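
    (As a generic illustration only, since the question asks for no project code: the classic forward/backward pass over a task dependency graph (critical path method), with flights and durations made up for this sketch.)

      # Generic critical-path sketch with made-up flights; not project code.
      durations = {"F1": 3, "F2": 2, "F3": 4, "F4": 1}
      deps = {"F1": [], "F2": ["F1"], "F3": ["F1"], "F4": ["F2", "F3"]}   # flight -> prerequisite flights

      order = []                                     # topological order via repeated sweeps
      while len(order) < len(deps):
          for f, pre in deps.items():
              if f not in order and all(p in order for p in pre):
                  order.append(f)

      earliest = {}
      for f in order:                                # forward pass: earliest possible start
          earliest[f] = max((earliest[p] + durations[p] for p in deps[f]), default=0)

      project_end = max(earliest[f] + durations[f] for f in order)

      latest = {}
      for f in reversed(order):                      # backward pass: latest start without delaying the project
          followers = [g for g in deps if f in deps[g]]
          finish = min((latest[g] for g in followers), default=project_end)
          latest[f] = finish - durations[f]

      critical = [f for f in order if earliest[f] == latest[f]]
      print(project_end, critical)                   # shortest project duration and its critical tasks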

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program and I would like to offer the feature to import csv files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders: "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name" "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence" Notice that in one data field "Name" there is a comma to separate last name and first name and in another there is not. My plan is to have a line for each field (ie ID, Name, City etc.) and a statement "import to" and a list box with options like: Don't Import, BusinessJoin Date, First Name, Zip and the program recognizes those as properties of an object... I'd also like the user to be able to record preset field orders so they can re-use them for csv files from the same download source. Then I also need it to check if a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is a record (ie the mailing address may have changed, so if that field is filled overwrite it, or this mailing address is invalid, leave the current data untouched for this person, overwrite the rest). While I'm capable of coding this... I'm not very excited about it and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality... I hope this makes sense...
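
    (A rough sketch, using Python's standard csv module, of the "user tells the program the field order" idea: a per-source mapping from CSV column to contact property, where None means "don't import", plus a simple overwrite-on-match merge rule. The column and property names are made-up examples, not an existing tool.)

      # Sketch only: column names and property names below are made-up examples.
      import csv

      mapping = {                        # could be saved per download source and re-used
          "ID#": "member_id",
          "Name": "full_name",
          "Zip": "zip_code",
          "Join Date": "join_date",
          "Rank": None,                  # don't import this column
      }

      def import_contacts(path, existing):
          with open(path, newline="", encoding="utf-8") as f:
              for row in csv.DictReader(f):       # quoted fields like "Call, Anson" are handled for free
                  fields = {prop: row[col] for col, prop in mapping.items()
                            if prop is not None and row.get(col)}
                  key = fields.get("member_id")
                  if key in existing:
                      existing[key].update(fields)   # merge rule: overwrite only the fields that are filled
                  else:
                      existing[key] = fields
          return existing

      contacts = import_contacts("download.csv", {})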

    Read the article

  • Do you ever make a code change and just test rather than trying to fully understand the change you've made?

    - by Clay Nichols
    I'm working in a 12 year old code base which I have been the only developer on. There are times that I'll make a very small change based on an intuition (or quantum leap in logic ;-). Usually I try to deconstruct that change and make sure I read the code thoroughly. However, sometimes (more and more these days) I just test and make sure it had the effect I wanted. (I'm a pretty thorough tester and would test even if I read the code). This works for me and we have surprisingly (compared to most software I see) few bugs escape into the wild. But what I'm wondering is whether this is just the "art" side of coding. Yes, in an ideal world you would exhaustively read every bit of code that your change modified, but in practice, if you're confident that it only affects a small section of code, is this a common practice? I can obviously see where this would be a disastrous approach in the hands of a poor programmer. But then, I've seen programmers who ostensibly are reading the code and break stuff left and right (in their own code base which only they have been working on).

    Read the article

  • Parallel programming, are we not learning from history again?

    - by mezmo
    I started programming because I was a hardware guy that got bored, I thought the problems being solved in the software side of things were much more interesting than those in hardware. At that time, most of the electrical buses I dealt with were serial, some moving data as fast as 1.5 megabit!! ;) Over the years these evolved into parallel buses in order to speed communication up, after all, transferring 8/16/32/64, whatever bits at a time incredibly speeds up the transfer. Well, our ability to create and detect state changes got faster and faster, to the point where we could push data so fast that interference between parallel traces or cable wires made cleaning the signal too expensive to continue, and we still got reasonable performance from serial interfaces, heck some graphics interfaces are even happening over USB for a while now. I think I'm seeing a like trend in software now, our processors were getting faster and faster, so we got good at building "serial" software. Now we've hit a speed bump in raw processor speed, so we're adding cores, or "traces" to the mix, and spending a lot of time and effort on learning how to properly use those. But I'm also seeing what I feel are advances in things like optical switching and even quantum computing that could take us far more quickly that I was expecting back to the point where "serial programming" again makes the most sense. What are your thoughts?

    Read the article

  • How to scroll to the bottom of a UITableView on the iPhone before the view appears

    - by acqu13sce
    I have a UITableView that is populated with cells of a variable height. I would like the table to scroll to the bottom when the view is pushed into view. I currently have the following function NSIndexPath *indexPath = [NSIndexPath indexPathForRow:[log count]-1 inSection:0]; [self.table scrollToRowAtIndexPath:indexPath atScrollPosition:UITableViewScrollPositionBottom animated:NO]; log is a mutable array containing the objects that make up the content of each cell. The above code works fine in viewDidAppear however this has the unfortunate side effect of displaying the top of the table when the view first appears and then jumping to the bottom. I would prefer it if the table view could be scrolled to the bottom before it appears. I tried the scroll in viewWillAppear and viewDidLoad but in both cases the data has not been loaded into the table yet and both throw an exception. Any guidance would be much appreciated, even if it's just a case of telling me what I have is all that is possible.

    Read the article

  • breaking out of for loop when running a function inside a for loop

    - by andrewj
    I'm embarrassed that I'm asking this question, but here I go: Suppose you have the following function foo. When I'm running a for loop, I'd like it to skip the remainder of foo when foo initially returns the value of 0. However, break doesn't work when it's inside a function. As it's currently written, I get an error message, no loop to break from, jumping to top level. Any suggestions? foo <- function(x) { y <- x-2 if (y==0) {break} # how do I tell the for loop to skip this z <- y + 100 z } for (i in 1:3) { print(foo(i)) }

    Read the article

  • Migrating test cases & defects from Quality Center to TFS 2008/2010

    - by stackoverflowuser
    Tool that can be used to migrate (or even better, synchronize) test cases and bugs between: TFS 2008 and Quality Center 9.2 (or later) TFS 2010 and Quality Center 9.2 (or later) I am aware of the following tools: Test Case Migrator (Excel/MHT) Tool TFS Bug Item Synchronizer 2.2 for Quality Center Also, Shai Raiten mentions on his blog a QC to Team System 2010 migration tool that he has been working on, and that it is done. But I could not find any link for downloading the tool. http://blogs.microsoft.co.il/blogs/shair/archive/2009/12/31/quality-center-migration-to-team-system-2010-done.aspx Before jumping into coding with the TFS SDK and QC components to come up with my own tool, I need some input from the Stack Overflow community.

    Read the article

  • flash dynamic textfield, multiline and wordwrap acting funky.

    - by pfunc
    I have this weird thing happening with flash and a dynamic textfield. Basically, someone rolls over a marker on a map, and a tooltip pops up with a dynamic textfield. The textfield is set to multiline=true and wordwrap = true, and I defined a specific width of 160 pixels. The problem is, some of my text is jumping to the next line and some of it is just getting cut off. So if I have a line like "The Cat Jumped Over the Box", on one line I will see "The Cat Jumped" and on the next line I would see "the Box". It looks like it is masking out the "Over" rather than pushing it to the next line. It's not doing this for everything, just some longer lines. This is a really weird bug and I have tried for 8 hours to get this fixed. Has anyone run into this problem before?

    Read the article

  • How to speed up calculation of length of longest common substring?

    - by eSKay
    I have two very large strings and I am trying to find out their Longest Common Substring. One way is using suffix trees (supposed to have a very good complexity, though a complex implementation), and another is the dynamic programming method (both are mentioned on the Wikipedia page linked above). Using dynamic programming The problem is that the dynamic programming method has a huge running time (complexity is O(n*m), where n and m are the lengths of the two strings). What I want to know (before jumping in to implement suffix trees): Is it possible to speed up the algorithm if I only want to know the length of the common substring (and not the common substring itself)?
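
    (A minimal sketch in Python: if only the length is needed, the dynamic program still costs O(n*m) time, but the table can be trimmed to two rows, i.e. O(min(n, m)) extra space, since each row depends only on the previous one.)

      # Two-row DP for the length of the longest common substring.
      def lcs_substring_length(a, b):
          if len(b) > len(a):
              a, b = b, a                    # keep b as the shorter string
          prev = [0] * (len(b) + 1)
          best = 0
          for ch_a in a:
              curr = [0] * (len(b) + 1)
              for j, ch_b in enumerate(b, start=1):
                  if ch_a == ch_b:
                      curr[j] = prev[j - 1] + 1
                      best = max(best, curr[j])
              prev = curr
          return best

      print(lcs_substring_length("quantumjumping", "jumpingquantum"))   # 7 ("quantum" or "jumping")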

    Read the article

  • JQuery preventDefault() but still add the fragment path to the URL without navigating to the fragment

    - by jdln
    My question is similar to this one but none of the answers solve my problem: Use JQuery preventDefault(), but still add the path to the URL When a user clicks a fragment link, I need to remove the default behaviour of jumping to the fragment but still add the fragment to the URL. This code (taken from the link) will fire the animation, and then add the fragment to the URL. However the fragment is then navigated to, which in my case breaks my site. $("#login_link").click(function (e) { e.preventDefault(); $("#login").animate({ 'margin-top': 0 }, 600, 'linear', function(){ window.location.hash = $(this).attr('href'); }); });

    Read the article

  • Building webparts with Visual Studio 2010 Express

    - by MalphasWats
    Hi, I'm trying to get started with building my own webparts, planning to follow this MSDN article. I've downloaded Visual C# 2010 Express - I'm not quite at the point where I feel comfortable dropping 1000 big ones yet, and I installed Visual Web Developer 2010 Express via the WPInstaller. Following through the tutorial, aside from the fact that I don't get the option to create a "Web Control Library", a gap I filled with this article, I can't seem to find the sn.exe tool (or the "Visual Studio 2005 Command Prompt"!). I know it's not quite a direct programming related question, but I can't even get the thing going yet! Any help is appreciated. Thanks EDIT:- I think I may be jumping the gun quite considerably, I wrote a simple hello world example and tried to build it but it doesn't have any references to the Microsoft.SharePoint packages and they don't appear in my lists. Am I understanding some more research I've done (namely this) correctly, in that I have to actually have a full installation of actual SharePoint on the machine I'm developing on?

    Read the article

  • Troubleshooting source of heavy resource-usage on a server 2008 running multiple sites

    - by batman_man
    Hi, I am running about 10 asp.net websites on a hosted virtual server. The server runs Server 2008 - each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of discovery I could think of doing was looking in the task manager, where I can see w3wp and sqlserver.exe jumping to 40% cpu usage every 5-10 seconds. What are the steps I can take to determine which of my websites is taking these resources and/or which database is getting hit the most? I have of course SSMS installed on the machine as well. As you can tell, my sysadmin skills are very very limited - any help would be much appreciated.

    Read the article

  • Open Source Utilization Questions: How do you lone wolf programmers best take advantage of open source?

    - by Funkyeah
    For Clarity: So you come up with an idea for a new program and want to start hacking, but you also happen to be a one-man army. How do you programming dynamos best find and utilize existing open-source software to give you the highest jumping-off point possible when diving into your new project? When you do jump in, where the shit do you start from? Any imaginary scenarios would be welcome, e.g. a shitty example might be utilizing an open-source database with an open-source IM client as a starting-off point to make a new client where you could tag and store conversations and query those tags at a later time.

    Read the article

  • Tableview lags, and glitches while scrolling due to images loading in each cell

    - by Luis Tovar
    Hey Guys! Me again! Can someone provide me with some example code of how to properly load images in a tablecell without making the tableview glitch while scrolling? For example, if you look at Fandango's app, when scrolling through their movies you can see that the image is loaded asynchronously so that the scroll isn't jumping, lagging, or glitchy. Thanks in advance. Right now I have the images loading just fine but it is glitchy as I scroll because the images are loading on the main thread. I am creating an app very similar to Fandango. PS I am downloading these images via XML (you know what I mean). Thanks!!

    Read the article
