Search Results

Search found 23182 results on 928 pages for 'worst case'.

Page 410/928 | < Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >

  • Automatic LaTeX document generation from Excel spreadsheet

    - by Bowler
    I have some data in an Excel file from which I have to generate a report. I repeat this task fairly regularly and am looking to automate it. I have a LaTeX project into which I usually just copy data by hand, export the necessary worksheets as PDFs, add them to my LaTeX project, and compile with pdflatex. It has occurred to me that there must be a way to automate this process. Is there an efficient way to export the data from Excel into a LaTeX project? Perhaps a VBA script in Excel could run the process? Also, it doesn't have to be LaTeX; I'm not all that experienced with MS Office's more advanced features. Is there some way, akin to a mail merge, that I could achieve this with? In some ways this might be better in case I have to pass the work on to someone who doesn't know LaTeX. Thanks.
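    One possible route (a sketch, not part of the original question): drive the whole pipeline from a small Python script instead of VBA, reading the workbook with openpyxl and emitting a LaTeX table that the project \input{}s before running pdflatex. The file and sheet names below are invented for illustration.

        # Sketch: export a worksheet to a LaTeX table, then compile the report.
        # Assumes openpyxl is installed and report.tex contains \input{data_table.tex}.
        import subprocess
        from openpyxl import load_workbook

        wb = load_workbook("data.xlsx", data_only=True)  # data_only: cell values, not formulas
        ws = wb["Results"]  # hypothetical sheet name

        rows = [[cell.value for cell in row] for row in ws.iter_rows()]
        with open("data_table.tex", "w") as f:
            f.write("\\begin{tabular}{%s}\n" % ("l" * len(rows[0])))
            for row in rows:
                f.write(" & ".join(str(v) for v in row) + " \\\\\n")
            f.write("\\end{tabular}\n")

        subprocess.run(["pdflatex", "report.tex"], check=True)

    The same idea works from VBA (loop over cells, write a .tex file, shell out to pdflatex); the point is that the report data lands in a file the LaTeX project includes, so regenerating the report is a single command.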

    Read the article

  • DDD and Value Objects. Are mutable Value Objects a good candidate for Non Aggr. Root Entity?

    - by Tony
    Here is a little problem. I have an entity with a value object. Not a problem. I replace a value object with a new one, then NHibernate inserts the new value, orphans the old one, and deletes it. OK, that's a problem. Insured is my entity in my domain. He has a collection of Addresses (value objects). One of the addresses is the MailingAddress. When we want to update the mailing address, let's say the zipcode was wrong, then following Mr. Evans' doctrine, we must replace the old object with a new one since it's immutable (a value object, right?). But we don't want to delete the row, though, because that address's PK is a FK in a MailingHistory table. So, following Mr. Evans' doctrine, we are pretty much screwed here. Unless I make my addresses Entities, so I don't have to "replace" them, and simply update the zipcode member, like in the good old days. What would you suggest in this case? The way I see it, value objects are only useful when you want to encapsulate a group of database table columns (a component in NHibernate). Everything that has a persistence id in the database is better off made an Entity (not necessarily an aggregate root) so you can update its members without recreating the whole object graph, especially if it's a deeply nested object. Do you concur? Is it allowed by Mr. Evans to have a mutable value object? Or is a mutable value object a candidate for an Entity? Thanks
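    For what it's worth, the distinction in question can be made concrete with a minimal Python sketch (field names are invented; this is not the asker's NHibernate mapping): a value object is immutable and compared by its fields, while an entity carries an identity and may be updated in place.

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)  # value object: immutable, equality by value
        class Address:
            street: str
            city: str
            zipcode: str

        @dataclass  # entity: has an identity, holds mutable state
        class Insured:
            insured_id: int
            mailing_address: Address

            def correct_zipcode(self, zipcode: str) -> None:
                # replace() builds a brand-new Address; the old value is dropped.
                # This wholesale swap is exactly what makes the ORM see an
                # orphaned row, which is the problem described above.
                self.mailing_address = replace(self.mailing_address, zipcode=zipcode)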

    Read the article

  • Fuzzing for Security

    - by Sylvain Duloutre
    Yesterday, I attended an internal workshop about ethical hacking. Hacking skills like fuzzing can be used to quantitatively assess and measure security threats in software. Fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems, or networks by injecting massive amounts of random data, called fuzz, into the system in an attempt to make it crash. If the program contains a flaw that leads to an exception, crash, or server error (in the case of web apps), it can be determined that a vulnerability has been discovered. A fuzzer is a program that generates and injects random (and in general faulty) input into an application. Its main purpose is to make things easier and automated. There are typically two methods for producing the fuzz data that is sent to a target: generation or mutation. Generational fuzzers are capable of building the data being sent based on a data model provided by the fuzzer creator. Sometimes this is as simple and dumb as sending random bytes or swapping bytes, and sometimes much smarter, knowing good values and combining them in interesting ways. Mutation, on the other hand, starts out with a known good "template" which is then modified. However, nothing that is not present in the "template" or "seed" will be produced. Generally, fuzzers are good at finding buffer overflow, DoS, SQL injection, and format string bugs. They do a poor job at finding vulnerabilities related to information disclosure, encryption flaws, and any other vulnerability that does not cause the program to crash. Fuzzing is simple and offers a high benefit-to-cost ratio, but does not replace other proven testing techniques. What is your computer doing over the weekend?
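    As a concrete illustration of the mutation style, here is a minimal sketch in Python; parse() stands in for whatever hypothetical target function is under test.

        import random

        def mutate(seed: bytes, n_mutations: int = 8) -> bytes:
            """Return a copy of seed with a few random byte changes (mutation fuzzing)."""
            data = bytearray(seed)
            for _ in range(n_mutations):
                data[random.randrange(len(data))] = random.randrange(256)
            return bytes(data)

        def fuzz(parse, seed: bytes, rounds: int = 1000) -> None:
            """Feed mutated inputs to the target and report anything that blows up."""
            for i in range(rounds):
                case = mutate(seed)
                try:
                    parse(case)
                except Exception as exc:
                    print(f"round {i}: crash on {case!r}: {exc}")

    Note that, as the post says, nothing absent from the seed will ever be produced; a generational fuzzer would build inputs from a data model instead.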

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR - a problem that R is good at solving: Internal Rate of Return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps - Calculating IRR in R: We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. The spreadsheet provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, you can look in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the Net Present Value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function of bmv, cf, t, emv, and tend, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time.

    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, passing low and high scalars that indicate the range to search for an answer. To test this out, we set values for bmv, cf, t, emv, and tend, with low and high set to some reasonable defaults. For example, one sample account had a negative 2.2% money weighted rate of return.

    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm.

    At this point, we can write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In the actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package, and to turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory. In those files, you need to at least provide a value for the "title" section. Once that is done, you can change directory to the IRR directory and build the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: Once our IRR function was converted to an installable package, we installed it and tested it.

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function to calculate the money weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used. See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:

    - R could easily work with the spreadsheets of sample data we were given
    - R's optimize() function provided a nice way to solve for IRR; it was both fast and allowed us to avoid having to code our own iterative approximation algorithm
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secured; another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
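    As a rough illustration of the NPV-plus-optimize idea described above, here is a sketch in Python with scipy; the formula follows the standard money-weighted definition, and the function names and default search bounds are assumptions, not the post's actual R code.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def npv(r, bmv, cf, t, emv, tend):
            """Net present value of the account's flows at rate r.
            bmv/emv: beginning/ending market value; cf, t: cash flow amounts and times."""
            return bmv * (1 + r) ** tend + np.sum(cf * (1 + r) ** (tend - t)) - emv

        def mwrr(bmv, cf, t, emv, tend, low=-0.99, high=10.0):
            """Money weighted rate of return: the r that brings npv closest to zero."""
            res = minimize_scalar(
                lambda r: abs(npv(r, bmv, np.asarray(cf), np.asarray(t), emv, tend)),
                bounds=(low, high), method="bounded")
            return res.x

    A pandas groupby().apply() over the account column would then play the role of R's by() for the all-accounts step.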

    Read the article

  • "Sent on behalf" not appearing when delegates sending mails

    - by New Steve
    Ringo is a delegate of Paul's mailbox in Exchange, but when Ringo sends mail from Paul's mailbox, the recipient sees "Paul" in the sender field, rather than "Paul Sent On Behalf Of Ringo". Paul has set "Editor" permissions for Ringo on his mailbox, and Ringo has been granted "Send on behalf of" permissions in Exchange. Ringo did at one time have "Send As" permissions for Paul's mailbox in Exchange, but this has since been removed. This is also the case for all other delegates of Paul's mailbox. How do I make it so that emails sent by Paul's delegates show the "Sent On Behalf Of" information in the Sender field? Using Exchange Server 2007 and Microsoft Office Outlook 2007.

    Read the article

  • OS X will not register newly installed network adapters

    - by Chris
    I have purchased an Edimax 7318USg and tested it on a Windows machine (works). The installation process for this adapter's software runs smoothly. However, OS X simply does not recognize new network adapters. When you go to System Preferences/Network, a new network adapter should be present, or there should be an alert; this is not the case. Why might this be? Is there a setting I can reset to force the operating system to recognize this? Thanks!

    Read the article

  • How to consolidate servers with the not-very-strong infrastructure

    - by Sim
    All,

    Situation:
    - We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems.
    - Each distributor has 1 HQ and 5 - 10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system).
    - Every day, branches have to extract data and send it (via email/Skype) to HQ for data consolidation purposes.
    - When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough. That's why we went with the de-centralized model (each branch got its own server).
    - Now the infrastructure is mature, and we need to consolidate data more quickly (not branches -- HQ -- our company, but something like HQ -- our company only).

    Goal:
    - Have Solomon servers only in distributor HQs. All the transactions in branches (retrieved from POS) will be synchronized with the HQ server directly.
    - Have a backup plan in case the Internet goes down or the HQ server goes down.

    Question:
    Given the above, could you guys suggest a model for me? Should we use Terminal Services, or other solutions? Any watchouts/suggestions? Any good articles to read about this? Thanks a lot

    Read the article

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though usually version 1.2.3 is a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB/file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:

    - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
    - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often so that you notice any problems early, when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
    - Something in between. Is there a sweet spot?

    Read the article

  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI contains several buttons, labels, and other visual information. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this: when button A is pressed, the following should happen. In the case of that A button, there are three independent factors that influence the possible result of pushing it: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused), and a separate UI element that has two states (on, off). This gives me a 3-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc., but the problem is, I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, some change parameters... To sum up, the combinations are mind-boggling and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow in words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts, but it soon became visually too complicated due to too many branching "if" nodes. Can you advise how to proceed?
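    One way to make such a cube tractable (a minimal table-driven sketch in Python; all state and action names are invented) is to enumerate the states explicitly and drive each button from a transition table, so every one of the 24 combinations is a visible, testable entry rather than a nest of ifs.

        from enum import Enum, auto

        class Session(Enum):
            BLANK = auto(); SINGLE = auto(); MULTI = auto(); MULTI_SPECIAL = auto()

        class Work(Enum):
            IDLE = auto(); RUNNING = auto(); PAUSED = auto()

        class Toggle(Enum):
            ON = auto(); OFF = auto()

        # One entry per (session, work, toggle) combination that button A handles.
        # Unlisted combinations fail fast with a KeyError instead of silently
        # doing the wrong thing.
        BUTTON_A = {
            (Session.BLANK, Work.IDLE, Toggle.OFF): "start_single_session",
            (Session.SINGLE, Work.RUNNING, Toggle.ON): "pause_work",
            # ... the remaining combinations, written out one per line ...
        }

        def on_button_a(session: Session, work: Work, toggle: Toggle) -> str:
            action = BUTTON_A[(session, work, toggle)]
            print(f"button A -> {action}")
            return action

    Each of the other seven buttons gets its own table; the tables, not the control flow, become the single place where the written workflow description lives.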

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API and for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start. The first approach simply declares getCurrentTime as follows:

    getCurrentTime():number

    where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following:

    myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().getCurrentTime());
    });

    The second approach declares getCurrentTime differently:

    getCurrentTime():CustomNumber

    where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event! Just listen to the returned object for value changes. Listening to this event would look like the following:

    myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().valueOf());
    });

    Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so:

    var result = myVideoPlayer.getCurrentTime() + 5;

    will work! So in the first approach, we listen to an object for a change in its property's value. In the second one, we directly listen to the property for a change in its value. There are multiple pros and cons for each approach; I just want to know which one developers would prefer to use!

    Read the article

  • SYN flooding still a threat to servers?

    - by Rob
    Well, recently I've been reading about different Denial of Service methods. One method that kind of stuck out was SYN flooding. I'm a member of some not-so-nice forums, and someone was selling a Python script that would DoS a server using SYN packets with a spoofed IP address. However, if you sent a SYN packet to a server with a spoofed IP address, the target server would return the SYN/ACK packet to the host that was spoofed. In which case, wouldn't the spoofed host return an RST packet, thus negating the 75-second wait, and ultimately failing in its attempt to DoS the server?

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I already read the page about cron issues and solutions here on AskUbuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.

    $ crontab -e
    SHELL=/bin/bash # otherwise it would be /bin/sh
    59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    Btw, the cron file ends with an empty line, as someone pointed out. Once the job has "finished"...:

    $ cat /tmp/duply.log
    Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Instead, running the script manually works correctly and gives this output:

    Start duply v1.5.7, time is 2012-06-22 17:06:39.
    [cut]
    ... here is a long output generated by duply.
    ... and yes, files have been backed up.
    [cut]
    --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried restarting the cron daemon (sudo service cron restart) but nothing changed. Do you have any suggestions to fix the issue?

    Read the article

  • Running a defective PHP file causes error 500

    - by John Brunner
    When I request a PHP file, I always get an error 500. I've looked at the logs of my Apache server, and they show that some includes etc. in the PHP file reference files which don't exist on the server. They don't exist because I'm just testing my PHP file. But is there a way to make the server run the PHP file in every case, even when something is wrong? Every 30 seconds an entry is made in the error_log file which says:

    [Sat Jun 09 17:55:07 2012] [error] [client 10.224.55.160] File does not exist: /var/www/html/index.html

    ... but there IS an index.html?!

    Read the article

  • Fault tolerance with a pair of tightly coupled services

    - by cogitor
    I have two tightly coupled services that can run on completely different nodes (e.g. ServiceA and ServiceB). If I start up another replicated copy of both these services for backup purposes (ServiceA-2 and ServiceB-2), what would be the best way of setting up a fault tolerant distributed system such that on a fault in either of the tightly coupled services ServiceA or ServiceB, the whole communication goes through backup ServiceA-2 and ServiceB-2? Overall, all the communication should go either through both services or through their backup replicas.

    |---- Service A
    |         |
    |     Service B
    |
    | (backup branch - used only on fault in Service A or B)
    |---- Service A-2
              |
          Service B-2

    Note that in case Service A goes down, data from Service B would be incorrect (and vice versa). Load balancing between the primary and backup branch is also not feasible.

    Read the article

  • Web-based (intranet / non-hosted) timesheet / project tracking tools

    - by warren
    I realize some similar questions have been asked along these lines before, but from reading through them today, it appears they don't match my use case. I am looking for a web-based, non-hosted time and project tracking tool. I've downloaded Collabtive so far, but am looking for other suggestions, too. My list of requirements:

    - runs on a standard LAMP stack
    - non-hosted (ie, there is an option to download and run it on a local server)
    - not a desktop/single-user application
    - easy to use - my audience is a mix of technical and non-technical folks
    - easy to maintain - when time for upgrading comes, I'd really like to not have to rebuild the app (a la ./configure ; make ; make install)
    - needs to support multiple users
    - free-form project additions: we don't have a central project management authority (users should be able to add whatever they're working on, not merely pick from a drop-down)

    Does anyone here have experience with such tools? It doesn't have to be free... but free is always nice :)

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision, I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way it is stored as a 1D array in the example is a bit confusing; should I try to store the data as a 2D array instead? For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right foot collision point and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly? Open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES Mario style pure box platforming!
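    The 1D-array storage is less confusing once the indexing is written down: the texture data is row-major, so the pixel at (x, y) lives at index y * width + x. A rough Python sketch of the upward scan described above (function names invented):

        def alpha_at(mask, width, x, y):
            """Alpha of pixel (x, y) in a row-major 1D array."""
            return mask[y * width + x]

        def resolve_foot_y(mask, width, foot_x, foot_y):
            """Scan upward from a colliding foot sensor; return the first y whose
            pixel is transparent, i.e. where the foot should rest. None = column solid."""
            for y in range(foot_y, -1, -1):
                if alpha_at(mask, width, foot_x, y) == 0:
                    return y
            return None

    Applying the correction immediately after the collision check, before the frame is drawn, avoids ever rendering the foot inside the slope.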

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline says that UK-based supermarket chain Morrisons is increasing their IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT; in this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business, from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • Ubuntu, control the init startup

    - by Xolve
    Ubuntu uses upstart instead of sysvinit. However, there are still runlevels and the links in them. I have installed tor and it has added itself to the startup of the OS. Now I want to remove it, and the popular options are to remove the service's start/stop links from the runlevels or to make the /etc/init.d/ script non-executable. This is fine, but it will be problematic if I later want to put tor back on the startup list: how would I know to put the proper sequence numbers in the proper runlevel directories? Is there a complete guide for this? What are the rules? Are there any tools to manage init?

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer; this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion, or of using interface-based designs [1]. Few of these developers think about what the runtime performance impact of this decision is. A simple test will show that this decision can have a serious impact. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type:

    Direct Call:   1.5uS
    Virtual Call: 13.0uS

    The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance then this ratio drops quite quickly. If the method does just 1mS of work, then the differential only accounts for a 1% decrease in performance. Additionally, the method in question must be called thousands of times in order to produce a measurable impact at the application level. Yet let us consider a situation such as the per-pixel processing of a graphics processing application. Here we may have a method which is called millions of times, and even the slightest increase in overhead can have significant ramifications. In this case, using either explicit virtuals or interface-based constructs is likely to be a mistake. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.

    [1] When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked as virtual.

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object. (This object could be another player.) The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:

    1. Normal packet timing; everything is okay.
    2. The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
    3. The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?

    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
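    One common way to sidestep the late/early/dropped-packet cases (not from the question; a standard technique, sketched in Python with invented names) is to render remote objects slightly in the past: buffer incoming snapshots and sample at now - delay, so a late packet usually still arrives before the interval it belongs to is needed, and corrections land in the buffer instead of yanking the rendered position.

        import bisect

        class SnapshotBuffer:
            """Buffers (time, position) snapshots; samples them at a delayed time."""
            def __init__(self, render_delay=0.1):  # seconds behind real time
                self.render_delay = render_delay
                self.times = []
                self.positions = []  # 3D positions as (x, y, z) tuples

            def add(self, t, pos):
                i = bisect.bisect(self.times, t)  # stays sorted even if packets reorder
                self.times.insert(i, t)
                self.positions.insert(i, pos)

            def sample(self, now):
                t = now - self.render_delay
                if not self.times:
                    return None
                i = bisect.bisect(self.times, t)
                if i == 0:
                    return self.positions[0]
                if i == len(self.times):
                    return self.positions[-1]  # nothing newer yet: hold (or extrapolate)
                t0, t1 = self.times[i - 1], self.times[i]
                a = (t - t0) / (t1 - t0)
                p0, p1 = self.positions[i - 1], self.positions[i]
                return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

    The same buffer generalizes to quadratic/cubic sampling by fitting over more than two neighboring snapshots.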

    Read the article

  • Keyboard freezes / stuck if a key pressed repeatedly

    - by Aziz Rahmad
    I use an Acer 4530. This problem has been happening since Ubuntu 10.10, and continues now that I use 11.04 dual-booted with Linux Mint 10. Every time I press one key repeatedly, like when I read a long article on a website/ebook or when I play games which require me to press arrow keys repeatedly, the keyboard randomly freezes. That is, whatever I press on the keyboard has no effect, and the same happens with the touchpad. However, a USB mouse works just fine. I later found out that it doesn't actually freeze; it's more like the key gets stuck. For example, when I play Tetris, where I usually press the w (down) key repeatedly, after some time it would "freeze". And if I put the cursor in, say, the browser's address bar, it would type "wwww..." infinitely. The only way I can fix it is by suspending the laptop, either by using the mouse or by closing the lid. In that case the laptop wakes up again automatically and everything is fine. (Usually my laptop wakes from suspend when I press any key.) It has happened since the first time I used Ubuntu 10.10, it also happens in Linux Mint 10, and it happens now in Ubuntu 11.04. It never happened when I used Windows, though. Has anyone encountered a similar problem? Does anyone know how to fix it permanently? UPDATE: I recently installed Windows 7 and the Windows 8 Developer Preview and both show similar symptoms. So I declare that this problem is not OS specific; it is probably a hardware problem.

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway, because I know I can get some good answers about this. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix stuff, and the knowledge for these actions is passed on more by 'oral transmission' than through modern tools. We need an efficient way to store it in a knowledge base so we can retrieve similar cases based on patterns (a stacktrace, an error message, a component name, a workflow step, ...), ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know an efficient knowledge base tool for this kind of use case?

    Read the article

  • POP Forums will be at Mix!

    - by Jeff
    If you've never been to Mix, you're missing out on what is arguably one of the best conferences that Microsoft does. I'm not just saying that because I work here... I felt that way before, having been to most of them. The breadth of people and disciplines make it a really exciting event that pushes it well beyond the "Redmond bubble," as I like to call it. You should go.In any case, there's an Open Source Fest happening the night before Mix starts, on Monday, from 6 to 9 p.m. There will be people there representing a ton of great projects, some as enormous as Umbraco, as well as people doing SDK's, controls and other neat stuff. Best of all, you get to vote for your favorites. Unless your favorite is Orchard, because Microsoft is sponsoring that directly. Or if it's POP Forums, not because Microsoft is sponsoring it, but because that's where I work in my day job. No prizes for me! Come by and say hello. I think the app will be nearly final by then, and it's already running on MouseZoom, one of my little side projects.The quality and diversity of open source projects around the Microsoft stack just keeps getting better. Our platform is also pretty great at running stuff we don't make. This will be a pretty exciting Mix. Can't wait!

    Read the article

  • Wifi not working on Acer Aspire One D270

    - by Dani
    Brand new baby Linux user here, never used Ubuntu or any other Linux OS before, so be gentle and use short words! I installed Ubuntu 12.04 on my new Acer Aspire One D270-F61C/KF netbook (it's a Japanese computer which had Japanese Windows preinstalled, and I decided to take the plunge and try Ubuntu because English Windows costs the earth and stars). Wifi isn't working; I enter my wireless password, it tries to connect for a while, then asks for my password again. And KEEPS ASKING, every few minutes. Wired connection works fine. The wireless card is a Broadcom BCM4313; I have the "additional drivers" checked and installed (I tried unchecking and then reinstalling them in case that would help, no joy, and now my home wifi connection isn't showing up in the list of available connections, argh). I've done a lot of googling and I gather there are a lot of issues with Broadcom cards, but some of the answers are for earlier Ubuntu builds and many of them are a bit confusing for a new user. I gather I need to try installing some new drivers other than the proprietary ones provided, but I'm having trouble figuring out how that's done. Anyone got some simple, step by step instructions for me? Please bear in mind, TOTAL N00B. (EDIT): OKAY, got it fixed finally; after suggestions on the Ubuntu forums and messing around with drivers, what finally worked was installing Wicd. Not... using Wicd, for some reason, just installing it fixed it. ...I CHOOSE NOT TO QUESTION IT.

    Read the article
