Search Results

Search found 11246 results on 450 pages for 'power supply unit'.


  • Excel 2010 - more than 1 calculation within an IF() statement

    - by Da Bajan
    I have a situation where I need to calculate shipping values based on the length of the supply chain. Easy enough, except that I need to ship an increased amount when specific date criteria are met. My example is as follows:

        Shipvalue = 100
        Date1 = 1/1/2013 (Jan) - ship 50% more than usual
        Date2 = 2/1/2013 (Feb) - ship 25% more than usual
        Date3 = 3/1/2013 (Mar) - ship 25% more than usual

        Supply chain length:
        June - October: 100 days
        November - March: 140 days
        April - June: 100 days

    The issue I have is with my formula:

        IF( Date1-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
          IF( Date2-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 25%),
            IF( Date3-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 25%),
              IF( preceding cell<>0, shipvalue, 0) ) ) )

    The problem is that if the length of the supply chain increases, the formula misses all but the first increase. So I thought of adding a variable that would be incremented and checked every time an increased shipping amount is made. So, how do I do both the calculation for the increased shipping value and set that variable in one part of the IF statement?

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume the most average scattering of file sizes: program files, OS files, data, mp3s, text files, etc.)

    Read the article

  • XSL - How to tell if element is last in series

    - by Chris
    I have an XSL template that is called (below). What I would like to do is be able to tell if I am the last Unit being called.

        <xsl:template match="Unit[@DeviceType = 'Node']">
          <!-- Am I the last Unit in this section of xml? -->
          <div class="unitchild">
            Node: #<xsl:value-of select="@id"/>
          </div>
        </xsl:template>

    Example XML:

        <Unit DeviceType="QueueMonitor" Master="1" Status="alive" id="7">
          <arbitarytags />
          <Unit DeviceType="Node" Master="0" Status="alive" id="8"/>
          <Unit DeviceType="Node" Master="0" Status="alive" id="88"/>
        </Unit>
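
    One way to answer "am I the last one?" inside that template is to test whether the current Unit has any following siblings of the same kind. A minimal sketch, reusing the element names and DeviceType filter from the question (the "(last)" output is just illustrative):

        <xsl:template match="Unit[@DeviceType = 'Node']">
          <div class="unitchild">
            Node: #<xsl:value-of select="@id"/>
            <!-- true only for the last matching sibling -->
            <xsl:if test="not(following-sibling::Unit[@DeviceType = 'Node'])"> (last)</xsl:if>
          </div>
        </xsl:template>

    If the templates are applied via an xsl:apply-templates that selects only these Unit elements, position() = last() is an equivalent test within that node list.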

    Read the article

  • My EEEPc 900HA won't turn on/boot. Can it be fixed or does it need to be sent in?

    - by th3dude19
    My EEEPc 900HA stopped booting up out of nowhere. It went to sleep, and when I went to wake it up, it wouldn't wake up. I powered down and tried to power back up, but no go. The power light comes on solid and the battery/corded power light blinks consistently. There is no HDD activity or light activity. No BIOS. Just the lights described. I've checked the RAM, HDD, and internal power connections, and everything checked out, both on battery power and corded power. I've also reset the CMOS, to no avail. What else can I do?

    Read the article

  • How do I supply extra info to IApplicationSettingsProvider class?

    - by joebeazelman
    Perhaps this question has been asked before in a different way, but I haven't been able to find it. I have one or more plugin adapter assemblies in my application, all having the type IPlugin, for instance. Each adapter has its own settings structures stored in a common directory. Whether they are stored in one contiguous file or in separate ones doesn't matter. Each adapter can have one or more settings associated with it. The settings will have both a name and the plugin they will be used for. How would I create such a configuration system, given the following requirements:

        - I want to use .NET's built-in settings system and avoid writing one from scratch
        - The host application will be responsible for locating the plugin settings and passing them to the plugin
        - Each plugin will be responsible for reading and writing its own settings, to separate concerns; the host application should call Plugin.Save(thePath) and it does its thing
        - All settings are user scoped

    So far, I realize that I would need to write my own SettingsProvider, but the provider seems to work in isolation, in that there's no way to pass it parameters such as the path of the plugin directory and the name of the settings. All of the example code I've seen has the provider getting the data from the runtime environment.

    Read the article

  • Wake On Lan only works on first boot, not subsequent ones

    - by sp3ctum
    I have converted my old Dell Latitude D410 laptop to a server for tinkering. It is running an updated Debian Squeeze (6) with a Xen-enabled kernel (I want to toy with virtual machines later on). I am running it 'headless' via an ethernet connection. I am struggling to enable Wake On Lan for the box. I have enabled the setting in the BIOS, and it works nicely, but only for the first time after the power cord is plugged in. Here is my test:

        1. Plug in the power cord, but don't boot yet
        2. Send a magic Wake On Lan packet from a test machine (Ubuntu) using the wakeonlan program
        3. Server expected to start (does every time)
        4. Once the server has booted, log in via ssh and shut it down via the operating system
        5. After shutdown, wake the server up via WOL again (fails every time)

    Some observations:

        - Right after step 1 I can see the integrated NIC has a light on. I deduce this means the NIC gets adequate power and that the ethernet cable is connected to my switch. This light is not on after step 4 (the shutdown stage). The light comes back on after I disconnect and reconnect the power cord, after which WOL works as well.
        - After step 4 I can verify that Wake On Lan is enabled via the ethtool program (repeatable each time).
        - This blog post suggested the problem may lie in the fact that the motherboard might not be giving adequate power to the NIC after shutdown, so I copied an acpitool script that supposedly should signal the system to give the needed power to the card when shut down. Obviously it did not fix my issue. I have included the relevant power settings in the paste below.
        - I have tried different combinations of options to shutdown (the program), as well as the poweroff program. I even tried "telinit 0", which I figured would be the most direct software shutdown.
        - If I keep the laptop's power button pressed down and do a hard power-off this way, the light on the ethernet port stays lit and a WOL is possible.
        - I copied a bunch of hopefully useful information in this paste.
        - I have tried this with the laptop battery connected and without it. I get the same result.
        - Promptly pressing the power button causes the system to shut down with the message "The system is going down for system halt NOW!", and WOL is still unsuccessful.
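
    For reference, a workaround often suggested for NICs that lose their wake state after an OS shutdown is to re-assert the WOL flag as the interface goes down. This is only a sketch and may not apply here (the observations above note that ethtool already reports WOL as enabled); the interface name eth0 and the ethtool path are assumptions:

        # /etc/network/interfaces (fragment) on Debian
        auto eth0
        iface eth0 inet dhcp
            post-up /sbin/ethtool -s eth0 wol g
            pre-down /sbin/ethtool -s eth0 wol g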

    Read the article

  • Java: how to do fast copy of a BufferedImage's pixels? (unit test included)

    - by WizardOfOdds
    I want to do a copy (of a rectangle area) of the ARGB values from a source BufferedImage into a destination BufferedImage. No compositing should be done: if I copy a pixel with an ARGB value of 0x8000BE50 (alpha value at 128), then the destination pixel must be exactly 0x8000BE50, totally overriding the destination pixel. I've got a very precise question and I made a unit test to show what I need. The unit test is fully functional and self-contained, is passing fine, and is doing precisely what I want. However, I want a faster and more memory efficient method to replace copySrcIntoDstAt(...). That's the whole point of my question: I'm not after how to "fill" the image in a faster way (what I did is just an example to have a unit test). All I want is to know what would be a fast and memory efficient way to do it (ie fast and not creating needless objects). The proof-of-concept implementation I've made is obviously very memory efficient, but it is slow (doing one getRGB and one setRGB for every pixel). Schematically, I've got this (where A indicates corresponding pixels from the destination image before the copy):

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA

    And I want to have this:

        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAABBBBAAA
        AAAAAAAAAAAAAAAAAAAA

    where 'B' represents the pixels from the src image. I'm looking for an exact replacement of the method, not for an API link/quote.

        import org.junit.Test;
        import java.awt.image.BufferedImage;
        import static org.junit.Assert.*;

        public class TestCopy {

            private static final int COL1 = 0x8000BE50; // alpha at 128
            private static final int COL2 = 0x1732FE87; // alpha at 23

            @Test
            public void testPixelsCopy() {
                final BufferedImage src = new BufferedImage( 5, 5, BufferedImage.TYPE_INT_ARGB );
                final BufferedImage dst = new BufferedImage( 20, 20, BufferedImage.TYPE_INT_ARGB );
                convenienceFill( src, COL1 );
                convenienceFill( dst, COL2 );
                copySrcIntoDstAt( src, dst, 3, 4 );
                for (int x = 0; x < dst.getWidth(); x++) {
                    for (int y = 0; y < dst.getHeight(); y++) {
                        if ( x >= 3 && x <= 7 && y >= 4 && y <= 8 ) {
                            assertEquals( COL1, dst.getRGB(x,y) );
                        } else {
                            assertEquals( COL2, dst.getRGB(x,y) );
                        }
                    }
                }
            }

            // clipping is unnecessary
            private static void copySrcIntoDstAt( final BufferedImage src,
                                                  final BufferedImage dst,
                                                  final int dx, final int dy ) {
                // TODO: replace this by a much more efficient method
                for (int x = 0; x < src.getWidth(); x++) {
                    for (int y = 0; y < src.getHeight(); y++) {
                        dst.setRGB( dx + x, dy + y, src.getRGB(x,y) );
                    }
                }
            }

            // This method is just a convenience method, there's
            // no point in optimizing this method, this is not what
            // this question is about
            private static void convenienceFill( final BufferedImage bi,
                                                 final int color ) {
                for (int x = 0; x < bi.getWidth(); x++) {
                    for (int y = 0; y < bi.getHeight(); y++) {
                        bi.setRGB( x, y, color );
                    }
                }
            }
        }
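
    As a possible drop-in replacement for copySrcIntoDstAt, one commonly used option is to move the pixels in bulk with BufferedImage's array-based getRGB/setRGB overloads instead of per-pixel calls. This is only a sketch of that idea: it assumes, like the test above, that the copy never needs clipping, and it still allocates one int[] per call (reading the rasters' data arrays directly would avoid even that).

        // Sketch: bulk ARGB copy; setRGB stores the values verbatim, no compositing.
        private static void copySrcIntoDstAt( final BufferedImage src,
                                              final BufferedImage dst,
                                              final int dx, final int dy ) {
            final int w = src.getWidth();
            final int h = src.getHeight();
            // read the whole source rectangle in one call...
            final int[] argb = src.getRGB( 0, 0, w, h, null, 0, w );
            // ...and write it into the destination in one call
            dst.setRGB( dx, dy, w, h, argb, 0, w );
        }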

    Read the article

  • Do I always have to supply Tkx's -command argument with an anonymous subroutine?

    - by Zaid
    I find it a bit weird that I have to wrap defined subroutines anonymously when specifying the -command argument for Tkx widgets. The example from TkDocs demonstrates this:

        my $cb = $frm->new_ttk__button(
            -text    => "Calculate",
            -command => sub {calculate();}
        );

        sub calculate {
            $meters = int(0.3048*$feet*10000.0+.5)/10000.0 || '';
        }

    Why doesn't it work when I write -command => &calculate() or -command => \&calculate()?
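
    For context, the difference between those spellings is plain Perl rather than anything Tkx-specific: a sub wrapped in sub { ... } or referenced with \&name is a callback to run later, whereas &name() runs immediately. A small runnable sketch of the distinction (Tkx itself is not involved here):

        #!/usr/bin/perl
        use strict;
        use warnings;

        sub calculate { print "calculate() ran\n"; return 42; }

        # Deferred: both of these store a code reference to be called later.
        my $cb1 = sub { calculate(); };   # anonymous wrapper
        my $cb2 = \&calculate;            # direct code reference

        # Immediate: this CALLS calculate() right now and stores its return
        # value (42), which is not a callback at all.
        my $not_a_callback = &calculate();

        $cb1->();   # runs calculate()
        $cb2->();   # runs calculate()

    So -command => \&calculate (without parentheses) should behave like the anonymous wrapper, while -command => &calculate() hands the widget the sub's return value instead of a callback.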

    Read the article

  • WPF - Why doesn't Microsoft supply a decent set of most-used controls?

    - by IUsedToBeAPygmy
    I've been playing with WPF for some months now, and I quite like it. But one of the things I don't get is why MS doesn't put a little more effort into helping developers by supplying basic controls, and I need to get this off my chest :) For example, I figure most applications will somewhere need to let you edit some properties - for configuration or whatever. What would be the most used types in a property-grid editor? Text, numbers (byte, float/double, int, etc.), colors... etc. So why isn't there even something as simple as a control to edit numbers? Like a generic NumericUpDown control that allows you to type in numbers (no text, no pasting invalid input) or spin them up/down according to some given rules (decimal, floating point, min/max value)? Why isn't there a generic colorpicker, so people get the same user experience in every application? Why isn't there a standard implementation of a SearchTextBox, a BreadCrumb control, or all these other standard control types users have gotten accustomed to over the last 10 years? (..but at least they DID have the time to implement a generic splashscreen - because everyone knows that greatly increases user productivity....) The well-known ideal is always to give people the same user experience across different applications. So even if some of those controls would be easy to make, it would be preferable to have one version used across different applications. I see people all over the internet trying to do the same stuff over and over again. Okay, so MS started a WPF Toolkit project on Codeplex that tries to implement some controls, but it only did so half-heartedly and is completely dead by now (the last update of the roadmap dates back to Mar 21 2009). The result of this is that a lot of people starting a WPF project end up spending a lot of time trying to figure out how to create some generic controls, and they get really frustrated. Wasn't the mantra "Developers, developers, developers!"..? /Rant

    Read the article

  • Force failover a Cisco ASA

    - by user974896
    I have two ASAs in a LAN state primary/secondary failover configuration. Neither of them has "failover active" or "no failover active" in its configuration. Would it be proper to fail over in a manner such as:

        1. Log into the console of the primary unit and issue "failover lan state secondary", then log into the console of the original secondary unit and issue "failover lan state primary". To fail back, simply reverse the process.

        2. Log into the console of the primary unit and issue "no failover active", then log into the console of the original secondary unit and issue "failover active". To fail back, issue "failover active" on the original primary (now secondary) unit, and "no failover active" on the now-primary unit.

    I do not like the second method because it adds configuration directives that were not in place before. Will the first method work?

    Read the article

  • How can I supply an AntiForgeryToken when posting JSON data using $.ajax?

    - by HerbalMart
    I am using the code below (from this post). First I fill an array variable with the correct values for the controller action. Using the code below, I think it should be very straightforward to just add the following line to the JavaScript:

        data["__RequestVerificationToken"] = $('[name=__RequestVerificationToken]').val();

    The <%= Html.AntiForgeryToken() %> is in its right place and the action has a [ValidateAntiForgeryToken] attribute, but my controller action keeps saying: "Invalid forgery token". What am I doing wrong here?

    Code:

        data["fiscalyear"] = fiscalyear;
        data["subgeography"] = $(list).parent().find('input[name=subGeography]').val();
        data["territories"] = new Array();
        $(items).each(function() {
            data["territories"].push($(this).find('input[name=territory]').val());
        });

        if (url != null) {
            $.ajax({
                dataType: 'JSON',
                contentType: 'application/json; charset=utf-8',
                url: url,
                type: 'POST',
                context: document.body,
                data: JSON.stringify(data),
                success: function() {
                    refresh();
                }
            });
        }
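
    One detail worth knowing is that the stock [ValidateAntiForgeryToken] attribute looks for the token in the form-encoded request body, so a token buried inside a JSON.stringify'd payload is never seen. A common workaround is to carry the token outside the JSON body, for example in a request header, and validate it there with a custom server-side filter (not shown). The sketch below reuses url, data and refresh from the question; the header name is an arbitrary choice for this example:

        // Sketch: post the JSON payload but send the anti-forgery token as a header.
        var token = $('input[name=__RequestVerificationToken]').val();

        $.ajax({
            url: url,
            type: 'POST',
            contentType: 'application/json; charset=utf-8',
            data: JSON.stringify(data),
            headers: { 'RequestVerificationToken': token },  // read by a custom filter server-side
            success: function () { refresh(); }
        });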

    Read the article

  • How do I supply a variable to put in another variable in PHP?

    - by Jared
    I'm not sure if I asked the question correctly. I have some code I am trying to embed. For instance:

        $menuPopup = '<IMG SRC="' . $someVariable . '">';

    Later on, I have a few product variables:

        $someProduct1 = 'image1.jpg';
        $someProduct2 = 'image2.jpg';

    Later on, I want to display $menuPopup, using a src from $someProduct1 or $someProduct2:

        //Pseudo code
        $menuPopup( $someProduct1 );

    Is there any way to do that?
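
    Since $menuPopup is built once, at assignment time, the substitution has to be deferred until the image name is known. One sketch of the idea uses a small helper function (the name menuPopup below is just illustrative); another keeps a template string and fills it with sprintf:

        <?php
        // Hypothetical helper: build the markup when the image is known.
        function menuPopup($src) {
            return '<IMG SRC="' . htmlspecialchars($src) . '">';
        }

        $someProduct1 = 'image1.jpg';
        $someProduct2 = 'image2.jpg';

        echo menuPopup($someProduct1);   // <IMG SRC="image1.jpg">

        // Alternative: a template with a placeholder, filled in later.
        $menuPopupTemplate = '<IMG SRC="%s">';
        echo sprintf($menuPopupTemplate, $someProduct2);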

    Read the article

  • Is it possible to supply the jQuery Validation plugin with a rules object?

    - by ethagnawl
    I took the rules that I'd been using to validate my form and broke them out into a JSON object, hoping to use the same set of rules for client/server validation. So, where I had:

        $(document.getElementById('new_listing_form')).validate({
            rules: {
                apt: { required: true }
            }
        });

    I'm attempting to use:

        var rules_obj = {
            apt: { required: true }
        };

        $(document.getElementById('new_listing_form')).validate({
            rules: rules_obj
        });

    ... and the form is not being validated. Does anyone know if this is possible?
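
    In general the plugin does accept a plain object for rules, so a rule set can be defined once (or fetched as JSON) and reused. A minimal sketch, using the form id and field name from the question; how the same object is shared with server-side code is left out:

        // Each key in the rules object must match the *name* attribute of a field.
        var rules_obj = {
            apt: { required: true }
        };

        $(function () {
            $('#new_listing_form').validate({
                rules: rules_obj
            });
        });

    Two things commonly behind "nothing happens": a rule key that doesn't match any input's name attribute, or validate() running before the rules object has been populated (e.g. when it is loaded asynchronously).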

    Read the article

  • Is it possible to supply template parameters when calling operator()?

    - by Paul
    I'd like to use a template operator() but am not sure if it's possible. Here is a simple test case that won't compile. Is there something wrong with my syntax, or is this simply not possible?

        struct A {
            template<typename T>
            void f() { }

            template<typename T>
            void operator()() { }
        };

        int main() {
            A a;
            a.f<int>();            // This compiles.
            a.operator()<int>();   // This compiles.
            a<int>();              // This won't compile.
            return 0;
        }
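
    The short-call form a<int>() has no place to hang the template argument list: the compiler parses a < int as a comparison, so only the explicit a.operator()<int>() spelling works. If the explicit argument is undesirable, one common pattern is to let the argument be deduced from a parameter. A small sketch of that idea (the type_tag helper is purely illustrative):

        #include <iostream>

        // Hypothetical helper whose only job is to carry a type into operator().
        template <typename T>
        struct type_tag {};

        struct A {
            template <typename T>
            void operator()(type_tag<T>) {
                std::cout << "operator() instantiated\n";
            }
        };

        int main() {
            A a;
            a(type_tag<int>{});   // T deduced as int; no explicit template syntax at the call site
            return 0;
        }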

    Read the article

  • How can I download a script using Javascript and supply a header to the GET?

    - by Alan
    I have this method for downloading a script:

        var script = document.createElement('script');
        script.setAttribute("src", url);
        document.getElementsByTagName("head")[0].appendChild(script);

    This gives me a GET like this:

        GET http://127.0.0.1:17315/Scripts/abc.js HTTP/1.1

    However I need to add a header:

        Authorization: Bearer Ipnsfm9h1MWYIM0n1ng

    Can anyone tell me how I can add a header when I am using Javascript to perform the GET?
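
    A request issued by a plain script element carries no custom headers, so one alternative is to fetch the script text yourself with XMLHttpRequest, where request headers can be set, and then inject the source. A sketch, assuming the script is served from the same origin (otherwise CORS comes into play); the token value is copied from the question:

        // Sketch: GET the script with an Authorization header, then execute it.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.setRequestHeader('Authorization', 'Bearer Ipnsfm9h1MWYIM0n1ng');
        xhr.onload = function () {
            if (xhr.status === 200) {
                // Inject the downloaded source as an inline script element.
                var script = document.createElement('script');
                script.text = xhr.responseText;
                document.getElementsByTagName('head')[0].appendChild(script);
            }
        };
        xhr.send();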

    Read the article

  • Questions, Knowledge Checks and Assessments

    - by ted.henson
    Questions should be used to reinforce concepts throughout the title. You have the option to include questions in the course, in assessments, in Knowledge Checks, or in any combination. Questions are required for creating Knowledge Checks and assessments. It is important to remember that questions that are not in assessments are not tracked. Be sure to structure your outline so that questions are added to the appropriate assignable unit. I usually recommend that questions appear directly below their relative section. This serves two purposes. First, it helps ensure that the related content and question stay relative to one another. Secondly, it ensures that when the "link to subject" option is used it will relate back to the relative content. Knowledge Checks are created using the questions that have been added to the related assignable unit. Use Knowledge Checks to give users an additional opportunity to review what they have learned. A Knowledge Check allows users to check their own knowledge without being tracked or scored. Many users like having this self-check option, especially if they know they are going to be tested later. Each assignable unit can have its own Knowledge Check. Assessments provide a way to measure knowledge or understanding of the course material. The results of each assessment are scored and tracked. Assessments are created using the questions that have been added to the relative assignable unit(s). Each assignable unit, including the Title AU, can have multiple assessments. Consider how your knowledge paths will be structured when planning your assessments. For instance, you can create a multiple-activity knowledge path with multiple assessments from the same title or assignable unit. Also remember that in Manager an assessment can be either a pre- or post-assessment. Pre-assessments allow the student to discover what is already known about a specific topic or subject, and are important if the personal course feature is being used. Post-assessments allow you to test the student's knowledge or understanding after they have completed the material.

    Read the article

  • How do I make changes to /proc/acpi/wakeup permanent?

    - by Jolan
    I had a problem with my Ubuntu 12.04 waking up immediately after going into suspend. I solved the problem by changing the settings in /proc/acpi/wakeup, as suggested in this question: How do I prevent immediate wake up from suspend?. After changing the settings, the system goes flawlessly into suspend and stays suspended, but after I wake it back up, the settings in /proc/acpi/wakeup are different from what I set them to. Before going to suspend:

        cat /proc/acpi/wakeup
        Device  S-state   Status     Sysfs node
        SMB0      S4    *disabled   pci:0000:00:03.2
        PBB0      S4    *disabled   pci:0000:00:09.0
        HDAC      S4    *disabled   pci:0000:00:08.0
        XVR0      S4    *disabled   pci:0000:00:0c.0
        XVR1      S4    *disabled
        P0P5      S4    *disabled
        P0P6      S4    *disabled   pci:0000:00:15.0
        GLAN      S4    *enabled    pci:0000:03:00.0
        P0P7      S4    *disabled   pci:0000:00:16.0
        P0P8      S4    *disabled
        P0P9      S4    *disabled
        USB0      S3    *disabled   pci:0000:00:04.0
        USB2      S3    *disabled   pci:0000:00:04.1
        US15      S3    *disabled   pci:0000:00:06.0
        US12      S3    *disabled   pci:0000:00:06.1
        PWRB      S4    *enabled
        SLPB      S4    *enabled

    I tell the system to suspend, and it works as it should. But later, after waking it up, the settings are changed to either:

        USB0      S3    *disabled   pci:0000:00:04.0
        USB2      S3    *enabled    pci:0000:00:04.1
        US15      S3    *disabled   pci:0000:00:06.0
        US12      S3    *enabled    pci:0000:00:06.1

    or

        USB0      S3    *enabled    pci:0000:00:04.0
        USB2      S3    *enabled    pci:0000:00:04.1
        US15      S3    *enabled    pci:0000:00:06.0
        US12      S3    *enabled    pci:0000:00:06.1

    Any ideas?

    Update: Thank you for your response. Unfortunately it did not solve my problem. All of

        /sys/bus/usb/devices/usb1/power/wakeup
        /sys/bus/usb/devices/usb2/power/wakeup
        /sys/bus/usb/devices/usb3/power/wakeup
        /sys/bus/usb/devices/usb4/power/wakeup

    as well as /sys/bus/usb/devices/3-1/power/wakeup are set to disabled, and the notebook still wakes up by itself right after going to sleep. The only thing it seems to react to are the settings in /proc/acpi/wakeup, which keep changing (resetting) every time I power off/restart my notebook.
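
    To the title question: /proc/acpi/wakeup is rebuilt by the kernel on every boot, so edits to it cannot be made permanent in place; the usual approach is to re-apply them at each boot, for example from /etc/rc.local. A minimal sketch, assuming the devices to change are USB2 and US12 as in the listing above (writing a device name to the file toggles that device between enabled and disabled, so this should run exactly once per boot):

        #!/bin/sh -e
        # /etc/rc.local (fragment): toggle unwanted wakeup devices at boot.
        # Each write flips the named device's state in /proc/acpi/wakeup.
        for dev in USB2 US12; do
            echo "$dev" > /proc/acpi/wakeup
        done
        exit 0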

    Read the article

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure).  Which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received.  (note that I almost deleted this email, thinking it was spam due to it’s length) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • How to manage a multiplayer asynchronous environment in a game

    - by Phil
    I'm working on a game where players can set up villages, which can contain defending units. Any of these units (each on their own tiles) can be set to "campaign", which means they are no longer defending but can now be used to attack other villages. Each unit on a tile can have up to 100 health. So far so good. Oh, and it's all asynchronous, so even though the server will be aware that your village is being attacked, you won't be until the attack is over. The issue I'm struggling with is the following situation. Let's say a unit on a tile is being attacked by a player from another village. The other player sees your village and is attacking your units. You don't know this is happening though, so you set your unit to campaign and off you go to attack another village, with the very unit that is actually being attacked by this other player. The other player stops attacking your village and leaves your unit with, say, a health of 1, which is then saved to the server. You, however, are attacking another village with this same unit, and now you discover that even though it started off with 100 health, it mysteriously only has 1... Solutions? Ideas?

    Edit: The simplest solutions are often the best. I referred to Clash of Clans below; well, after a bit more digging it seems that in CoC you can only attack players that are offline! Ha, that almost solves the problem. I say almost because there's still the situation where a player's village could be in the process of being attacked when they come back online; that still needs to be addressed.

    Edit 2: A solution to the "what happens when a player is attacking your village and you come online" issue could be that the attacking player simply gets kicked out of the village at that point and keeps whatever they had won up to then. It's a bit of a fudge, but it might work.

    Read the article

  • CodeStock 2012 Review: Eric Landes( @ericlandes ) - Automated Tests in to automated Builds! How to put the right type of automated tests in to the right automated builds.

    Automated Tests in to automated Builds! How to put the right type of automated tests in to the right automated builds.
    Speaker: Eric Landes
    Twitter: @ericlandes
    Blog: http://ericlandes.com/

    This was one of the first sessions I attended during CodeStock 2012. Eric's talk focused mostly on unit testing, and on the idea that the lack of proper unit testing can be compared to stealing from an employer. His point was that if you're not doing proper unit testing, then all of the time wasted on fixing issues that could have been detected with unit tests is like stealing money from the employer. He makes the assumption that the time spent on fixing these issues could have been better spent developing new features that drive the business. To a point I can agree with Eric's argument regarding unit testing and stealing from a company's perspective. I can see how he relates resources being shifted from new development to bug fixes as stealing, based on the fact that the resources used to fix bugs are taken directly from other projects. He also states that boring/redundant and build/test tasks should be automated, because it reduces the chance of errors and frees up developers to do what they do best: DEVELOP! When he refers to testing, he breaks it down into four distinct types:

        Unit Test
        Acceptance Test (this also includes Integration Tests)
        Performance Test
        UI Test

    With this he also recommends that developers should not go buck wild striving for 100% code coverage, because some tests may not provide a great return on investment. In his experience, 70% test coverage was a very acceptable rate.

    Read the article

  • How do I re-enable the backlight?

    - by Scott Severance
    Since Oneiric, if I leave my machine (an HP Mini 110 netbook) unattended and it goes into power-save mode, the backlight gets disabled. How can I turn it back on? Note that the keyboard backlight controls (Fn+F4 and Fn+F3) don't have any effect in this situation. I've already filed a bug, but filing a bug doesn't fix my problem. I tried this workaround posted in this bug report dealing with Acer laptops:

        sudo setpci -s 00:02.0 F4.B=0

    However, if anything, that command makes things worse. In the general case, I can see a little bit if I'm in a dark room with a flashlight aimed just so; but after running setpci I can't see anything at all. And I find the setpci documentation to be utterly incomprehensible, so I don't know whether I need to tweak my command somehow or whether I'm completely barking up the wrong tree.

    Update: I've found a workaround: I'm now booting with the kernel parameter acpi=off. This disables power management, which prevents the machine from going into power-saving mode and thus failing to come back up correctly. Of course, not having power management means that I can't use suspend or do anything to manage power other than powering the machine off (and even then, I have to use the power switch manually). Also, it prevents me from using Unity 3D or GNOME Shell, forcing me into Unity 2D or GNOME Classic. So, I'd really like to be able to stop using this hack.

    Read the article

  • Problems moving a rectangle in Pygame.

    - by Yann Core
    Hi guys! I'm making a game in Pygame and I want to be able to target enemy units. I made it so that when I click on one, a variable "targeted" becomes True, and stays True until I click somewhere else on the screen. I also want targeted units to have a small green circle around them, so I made one in GEDIT. I have made a function that draws everything on the screen (the background, the player, objects, etc.), and in the part where it draws the units it checks whether the variable "targeted" is True; if it is, it should move that little green circle over the enemy unit. Here is the code that does that:

        screen.blit(enemy_unit.pic, enemy_unit.rect)  # draw the unit
        if enemy_unit.targeted == True:  # if the unit has been targeted then draw a circle over it
            target_rect.move_ip(enemy_unit.pos)  # move the circle to the unit
            target_rect.fit(enemy_unit.rect)  # there are some bigger units and some smaller ones, so we have to "scale" the circle
            screen.blit(target_pic, target_rect)  # actually draw the circle

    This doesn't work: when I target the unit, the circle just appears for about 1/5 of a second next to the unit (not on it, just next to it) and then disappears. I am sure that I am keeping good track of "enemy_unit.pos", because I tested it (I added a piece of code that would print one unit's position and the mouse's position every time I clicked the mouse, and when I was near the unit the numbers were the same). Could you give me a hint about what I'm doing wrong? I think it's in the move_ip function, but I tried just move and it didn't work either (the circle didn't even show at all)!
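
    Two pygame details seem relevant here: Rect.move_ip() moves a rect by a relative offset (so calling it every frame with an absolute position keeps pushing the rect further away), and Rect.fit() returns a new rect rather than modifying the one it is called on, so its result is being discarded above. A sketch of one way this drawing step is often written; the attribute names pic, rect and targeted come from the snippet above, everything else is an assumption:

        import pygame

        def draw_enemy(screen, enemy_unit, target_pic):
            """Draw a unit and, if it is targeted, a selection circle centred on it."""
            screen.blit(enemy_unit.pic, enemy_unit.rect)

            if enemy_unit.targeted:
                # fit() returns a *new* rect scaled to the unit's size; keep it.
                circle_rect = target_pic.get_rect().fit(enemy_unit.rect)
                # Recompute the absolute position every frame instead of using
                # move_ip(), which would keep adding the offset to the old one.
                circle_rect.center = enemy_unit.rect.center
                scaled = pygame.transform.smoothscale(target_pic, circle_rect.size)
                screen.blit(scaled, circle_rect)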

    Read the article
