Search Results

Search found 4934 results on 198 pages for 'math round'.


  • How to JSON serialize math vector type in F#?

    - by The_Ghost
    Hello! I'm trying to serialize the "vector" type (Microsoft.FSharp.Math), and I get this error:

    Exception Details: System.Runtime.Serialization.SerializationException: Type 'Microsoft.FSharp.Math.Instances+FloatNumerics@115' with data contract name 'Instances.FloatNumerics_x0040_115:http://schemas.datacontract.org/2004/07/Microsoft.FSharp.Math' is not expected. Add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.

    I have tried applying the KnownType attribute and some other things, but nothing helps. Does anyone know the answer? This is the code I use:

        // [< KnownType( typeof<vector> ) >]
        type MyType = vector

        let public writeTest =
            let aaa = vector [1.1; 2.2]
            let serializer = new DataContractJsonSerializer( typeof<MyType> )
            let writer = new StreamWriter( @"c:\test.txt" )
            serializer.WriteObject(writer.BaseStream, aaa)
            writer.Close()

  • snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm finding that I'm having trouble making the sprites rotate correctly. In a nutshell, I've got spritesheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions, like this:

        protected readonly List<float> CardinalDirections = new List<float>
        {
            MathHelper.PiOver4,
            MathHelper.PiOver2,
            MathHelper.PiOver2 + MathHelper.PiOver4,
            MathHelper.Pi,
            -MathHelper.PiOver4,
            -MathHelper.PiOver2,
            -MathHelper.PiOver2 + -MathHelper.PiOver4,
            -MathHelper.Pi,
        };

    Here's the positional update code:

        if (velocity == Vector2.Zero)
            return;

        var rot = ((float)Math.Atan2(velocity.Y, velocity.X));
        TurretRotation = SnapPositionToGrid(rot);

        var snappedX = (float)Math.Cos(TurretRotation);
        var snappedY = (float)Math.Sin(TurretRotation);
        var rotVector = new Vector2(snappedX, snappedY);
        velocity *= rotVector;

        //...snip

        private float SnapPositionToGrid(float rotationToSnap)
        {
            if (rotationToSnap == 0)
                return 0.0f;

            var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01));
            return (float)Math.Round(targetRotation, 3);
        }

    What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be - the .First(..) call is on purpose so that it throws on no match - but I have no idea how I would go about accomplishing this, and unfortunately, Google hasn't helped much either. Am I thinking about this the wrong way, or is the answer staring me in the face?
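
    One way to make the snap total (a sketch of my own, not the poster's code): since the eight directions are evenly spaced at 45-degree intervals, divide the angle by Pi/4 and round to the nearest integer instead of searching a list with a fixed tolerance. This assumes the XNA MathHelper used in the post; WrapAngle keeps the result in (-Pi, Pi], matching the list above.

        // Snap any angle in radians to the nearest of the 8
        // cardinal/intercardinal directions by rounding to a multiple of Pi/4.
        private static float SnapToCardinal(float rotation)
        {
            const float step = MathHelper.PiOver4;                 // 45 degrees
            var snapped = (float)Math.Round(rotation / step) * step;
            return MathHelper.WrapAngle(snapped);                  // keep in (-Pi, Pi]
        }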

  • Polygon is rotating too fast

    - by Manderin87
    I am going to be using a polygon collision detection method to test when objects collide, and I am attempting to rotate a polygon to match the sprite's rotation. However, the polygon is rotating too fast - much faster than the sprite is. I feel it's a timing issue, but the sprite rotates like it is supposed to. Can anyone look at my code and tell me what could be causing this issue?

        public void rotate(float x0, float y0, double angle) {
            for (Point point : mPoints) {
                float x = (float) (x0 + (point.x - x0) * Math.cos(Utilities.toRadians(angle))
                                      - (point.y - y0) * Math.sin(Utilities.toRadians(angle)));
                float y = (float) (y0 + (point.x - x0) * Math.sin(Utilities.toRadians(angle))
                                      + (point.y - y0) * Math.cos(Utilities.toRadians(angle)));
                point.x = x;
                point.y = y;
            }
        }

    This algorithm works when done singly, but once I plug it into the update method the rotation is too fast. The points used are P1 (608, 368), P2 (640, 464), P3 (672, 400), and the origin (x0, y0) is (640, 400). The angle goes from 0 to 360 as the sprite rotates. When the code executes, the triangle looks like a star because it's moving so fast. The rotation is done in the sprite's update method; the rotation method just increases the sprite's degree by .5 each time it executes.

        public void update() {
            if (isActive()) {
                rotate();
                mBounding.rotate(mPosition.x, mPosition.y, mDegree);
            }
        }
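
    For what it's worth, this symptom is consistent with a cumulative-rotation bug rather than timing (my guess, not confirmed in the post): rotate() mutates mPoints in place, so passing the sprite's absolute degree every frame rotates the already-rotated points by the full angle again. A sketch of one fix, written in C# but mirroring the Java above; mOriginalPoints is a hypothetical pristine copy of the untransformed polygon, and Point is assumed to be a simple class with float x, y:

        // Rebuild the rotated points from a pristine copy each frame instead
        // of rotating the current (already rotated) points again.
        public void Rotate(float x0, float y0, double angleDegrees)
        {
            double r = angleDegrees * Math.PI / 180.0;
            float cos = (float)Math.Cos(r);
            float sin = (float)Math.Sin(r);

            for (int i = 0; i < mPoints.Count; i++)
            {
                Point p = mOriginalPoints[i];   // pristine, never mutated
                float x = x0 + (p.x - x0) * cos - (p.y - y0) * sin;
                float y = y0 + (p.x - x0) * sin + (p.y - y0) * cos;
                mPoints[i] = new Point(x, y);   // replace, don't accumulate
            }
        }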

  • Making user input/math on data fast, unlike excel type programs

    - by proGrammar
    I'm creating a research platform solely for myself to do some research on data. Programs like Excel are terribly slow for me, so I'm trying to come up with another solution.

    Originally I used Excel. A1 was the cell that contained the data, and all other cells in use calculated something on A1, or on other cells that could in the end all be traced back to A1. A1 was like an element of an array, and I incremented it to go through all my data. This was way too slow.

    The only other option I found originally was to hand-code the calculations in C# inside a loop, then recompile each time I changed my math. This was terribly slow to do, and I had to order everything correctly so things would update correctly (dependencies). I could have also used events, but hand-coding events for each cell-like calculation would also be very slow.

    Next I created an application to read Excel and imitate it exactly, which is what I now use. Basically, I write formulas onto a fraction of my data to get live results inside Excel. Then my program reads Excel, writes another C# program, compiles it, and runs that program, which runs my Excel-created formulas through a lot more data, a whole lot faster. The advantages are that my application sorts the dependencies (or I could use events) so I don't have to, the way Excel does - and of course the speed.

    But now it's not a single application anymore. Instead it's two applications: one which only reads my formulas and writes another program, and the resulting one, which only lives for a short while before I do other runs through my data with different formulas/settings. So I can't see multiple results at one time without introducing even more programs, like a database, or at least having the two applications talk to each other.

    My idea was to have a DLL that would be written, compiled, loaded, and unloaded again and again - a self-updating program, sort of. But apparently that's not possible without another AppDomain, which means data has to be marshaled to move between the AppDomains. That would slow things down - not for summaries, but for other stuff I need to do with all my data. I'm also forgetting to mention a huge problem with restarting an application again and again, which is having to reload ALL my data into memory each time. But it's still a whole lot faster than Excel.

    I'm really puzzled as to what people do when they want to research data fast. I'm completely unable to have a program accept user input and have it be fast. My understanding is that it would have to do what Excel does, which is to evaluate strings again and again, so my only option is to repeatedly compile applications. Do I have a correct understanding of computer science? I've only just begun programming, and didn't think I would have to learn this much to do some simple math on data. My understanding is it's either compiling my user-defined stuff into a program, or evaluating it from a string again and again - and my only option is probably to switch operating systems or something to be able to have a program compile and run itself without stopping (writing/compiling a DLL, loading the DLL into the program, unloading, and repeating).

    Can someone give me some idea of how computers work here? Is anything better possible - like a running program that can accept user input, compile it, and then unload it later? I mean, heck, operating systems don't need to be RESTARTED with every change to user input. What is this, the caveman days?
    Sorry, it's just so frustrating not knowing what one can and can't do. If only I could understand and learn this stuff fast enough.
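
    There is in fact a middle ground in .NET between "evaluate strings again and again" and "recompile the whole application": compiling code at runtime inside the running process. A minimal sketch using expression trees from the base class library (the formula here is hand-built for brevity; a real tool would first parse the user's formula string into this tree):

        using System;
        using System.Linq.Expressions;

        // Build and JIT-compile a formula at runtime, inside the running
        // process, with no new AppDomain and no process restart.
        class FormulaDemo
        {
            static void Main()
            {
                // f(x) = x * 2 + 1, assembled at runtime
                var x = Expression.Parameter(typeof(double), "x");
                var body = Expression.Add(
                    Expression.Multiply(x, Expression.Constant(2.0)),
                    Expression.Constant(1.0));

                // Compile() emits real IL; calling f costs the same as a normal delegate.
                Func<double, double> f =
                    Expression.Lambda<Func<double, double>>(body, x).Compile();

                var data = new[] { 1.0, 2.0, 3.0 };
                foreach (var v in data)
                    Console.WriteLine(f(v));   // 3, 5, 7
            }
        }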

  • First round playing with Memcached

    - by Shaun
    To be honest, I had not been very interested in caching before joining a project that uses multi-site deployment, has high connection counts and concurrency, and is very sensitive to the user experience. That means we must cache the output data for better performance. After looking around the Internet I finally settled on Memcached.

    What is Memcached? I think the description on its main site gives a very good and simple explanation:

    Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages.

    The original Memcached was built on *nix systems and is widely used in the PHP world. Although it's not a problem to use a Memcached installed on a *nix system, there are fortunately some Windows versions available. Since we are WISC (Windows - IIS - SQL Server - C#, the opposite of LAMP), it is much easier for us to use Memcached on Windows rather than *nix. I'm using the Memcached Win x64 version provided by NorthScale; there are also an x86 version and versions for other operating systems.

    Install Memcached

    Unpack the Memcached files to a folder on the machine you want it installed on; there are only 3 files, and the main one is "memcached.exe". Memcached runs on the server as a service. To install the service, open a command window, navigate to the folder which contains "memcached.exe" - let's say "C:\Memcached\" - and type "memcached.exe -d install". If you are using Windows Vista or Windows 7, be sure to execute the command as administrator: right-click the Command Prompt item in the Start menu and choose "Run as Administrator", otherwise Memcached will not install successfully. Once it is installed, type "memcached.exe -d start" to launch the service. Now it's ready to be used. The default port of Memcached is 11211, but you can change it through a command argument. You can find help by typing "memcached -h".

    Using Memcached

    Memcached has many good, ready-to-use client libraries for various programming languages. After comparing and reviewing, I chose the Memcached Providers library. It's built on top of another 3rd-party client, the enyim.com Memcached Client. Memcached Providers makes it very simple to set/get cached objects through the Memcached servers, and it is easy to configure through the application configuration file (aka web.config and app.config).

    Let's create a console application for the demonstration and add the 3 DLL files from the Memcached Providers package to the project references. Then we need to add the configuration for the Memcached server. Create an App.config file and first add the sections at the top of it. We need three: one for Memcached Providers, one for the enyim.com Memcached client, and one for log4net.
        <configSections>
          <section name="cacheProvider"
                   type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders"
                   allowDefinition="MachineToApplication"
                   restartOnExternalChanges="true"/>
          <sectionGroup name="enyim.com">
            <section name="memcached"
                     type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/>
          </sectionGroup>
          <section name="log4net"
                   type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
        </configSections>

    Then we add the configuration for each of the three in the App.config file. The Memcached server information is defined under the enyim.com section, since that component is responsible for connecting to the Memcached servers. Assuming I installed Memcached on two servers with the default port, the configuration would look like this:

        <enyim.com>
          <memcached>
            <servers>
              <!-- put your own server(s) here -->
              <add address="192.168.0.149" port="11211"/>
              <add address="10.10.20.67" port="11211"/>
            </servers>
            <socketPool minPoolSize="10" maxPoolSize="100"
                        connectionTimeout="00:00:10" deadTimeout="00:02:00"/>
          </memcached>
        </enyim.com>

    Memcached supports multi-server deployment, which means you can install Memcached on as many servers as you need. The Memcached protocol is responsible for routing the cached objects to the proper server, so it's very easy to scale out your system with Memcached.

    Next, define the Memcached Providers configuration. The defaultExpireTime indicates how long an object cached in Memcached lives before it expires; the default value is 2000 ms.

        <cacheProvider defaultProvider="MemcachedCacheProvider">
          <providers>
            <add name="MemcachedCacheProvider"
                 type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders"
                 keySuffix="_MySuffix_"
                 defaultExpireTime="2000"/>
          </providers>
        </cacheProvider>

    The last configuration is log4net:

        <log4net>
          <!-- Define some output appenders -->
          <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
            </layout>
          </appender>
          <!--<threshold value="OFF" />-->
          <!-- Setup the root category, add the appenders and set the default priority -->
          <root>
            <priority value="WARN"/>
            <appender-ref ref="ConsoleAppender">
              <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="WARN"/>
                <levelMax value="FATAL"/>
              </filter>
            </appender-ref>
          </root>
        </log4net>

    Get, Set and Remove the Cached Objects

    Once we have finished the configuration, it is very simple to consume the Memcached servers. Memcached Providers gives us a static class named DistCache that can be used to operate on the Memcached servers:

    Get<T>: Retrieve the cached object from the Memcached servers. If this fails, it returns null or the default value.

    Add: Add an object with a unique key into the Memcached servers.

    Assume we have an operation that retrieves an email address from a name and is time-consuming - exactly the kind of operation that should be cached. The method would be like this; I used Thread.Sleep to simulate the long-running work:

        static string GetEmailByNameSlowly(string name)
        {
            Thread.Sleep(2000);
            return name + "@ethos.com.cn";
        }

    Then, in the real retrieving method, we first check whether the name/email pair has been searched previously and cached.
    If so, we just return it from Memcached; otherwise we invoke the slow method to retrieve it and then cache it.

        static string GetEmailByName(string name)
        {
            var email = DistCache.Get<string>(name);
            if (string.IsNullOrEmpty(email))
            {
                Console.WriteLine("==> The name/email is not in memcached, so slow loading is needed. (name = {0}) ==>", name);
                email = GetEmailByNameSlowly(name);
                DistCache.Add(name, email);
            }
            else
            {
                Console.WriteLine("==> The name/email was already in memcached. (name = {0}) ==>", name);
            }
            return email;
        }

    Finally, let's finish the calling method and execute it.

        static void Main(string[] args)
        {
            var name = string.Empty;
            while (name != "q")
            {
                Console.Write("==> Please enter the name to find the email: ");
                name = Console.ReadLine();

                var email = GetEmailByName(name);
                Console.WriteLine("==> The email of {0} is {1}.", name, email);
            }
        }

    The first time I entered "ziyanxu" it took about 2 seconds to get the email, since nothing was cached. But the next time I entered "ziyanxu" it returned very quickly from Memcached.

    Summary

    In this post I explained a bit about why we need caching, what Memcached is, and how to use it from a C# application. The example is fairly simple, but hopefully it demonstrates the basics. Memcached is very easy and simple to use, since it gives you full control over what, when, and how to cache objects. And when using Memcached you don't need to think about which cache server to talk to: Memcached behaves like one huge object pool in front of you.

    The next steps I'm thinking about now are:

    What kind of data should be cached, and how should the key be determined?

    How to implement the cache as a layer on top of the business layer, so that the application does not notice the cache is there.

    How to implement the cache via AOP, so that the business logic need not consider caching at all.

    I will investigate these in the future and share my thoughts and results.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Isometric - precise screen coordinates to isometric

    - by Rawrz
    I'm trying to translate mouse coords to precise isometric coords (I can already find the tile the mouse is over, but I want it to be more precise). I've tried several different methods but I seem to keep falling short. For drawing I use:

        batch.draw(
            texture,
            (y * tileWidth / 2) + (x * tileWidth / 2),
            (x * tileHeight / 2) - (y * tileHeight / 2));

    This is what I currently use for figuring out a tile position:

        float xt = x + camPosition.x - (ScreenWidth / 2);
        float yt = (ScreenHeight) - y + camPosition.y - (ScreenHeight / 2);

        int tileY = Math.round((((xt) / tileWidth) - ((yt) / tileHeight)));
        int tileX = Math.round((((xt) / tileWidth) + ((yt) / tileHeight)) - 1);

    I'm just wondering how I could update these to allow for more precise coordinates, instead of tile-only ones.

    EDIT: Following what ccxvii said below, and removing the -1 from tileX, the object follows my mouse just like I had wanted. Just going to re-examine the math and figure out whether that change will result in other messes =o
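
    For reference, a sketch of the idea (my own reading of the code above, reusing the post's variable names, written in C#): the draw call maps tile (x, y) to screen via screenX = (x + y) * tileWidth/2 and screenY = (x - y) * tileHeight/2, so inverting that pair and simply not rounding gives fractional isometric coordinates - the integer part is the tile, the fractional part is the position within it.

        // Precise inverse of the draw transform above.
        float tx = (xt / tileWidth) + (yt / tileHeight);   // fractional iso X
        float ty = (xt / tileWidth) - (yt / tileHeight);   // fractional iso Y

        int tileX = (int)Math.Floor(tx);                   // which tile
        int tileY = (int)Math.Floor(ty);
        float withinX = tx - tileX;                        // 0..1 inside the tile
        float withinY = ty - tileY;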

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

  • Raycasting tutorial / vector math question

    - by mattboy
    I'm checking out the nice raycasting tutorial at http://lodev.org/cgtutor/raycasting.html and have a probably very simple math question. In the DDA algorithm I'm having trouble understanding the calculation of the deltaDistX and deltaDistY variables, which are the distances the ray has to travel from one x-side to the next x-side, or from one y-side to the next y-side, in the square grid that makes up the world map. In the tutorial they are calculated as follows, but without much explanation:

        //length of ray from one x or y-side to next x or y-side
        double deltaDistX = sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX));
        double deltaDistY = sqrt(1 + (rayDirX * rayDirX) / (rayDirY * rayDirY));

    rayDirX and rayDirY are the direction of a ray that has been cast. How do you get these formulas? It looks like the Pythagorean theorem is part of it, but somehow there's division involved here. Can anyone clue me in as to what mathematical knowledge I'm missing, or "prove" the formula by showing how it's derived?
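
    For what it's worth, here is one way to derive it (my sketch, not from the tutorial): to move along the ray so that x advances by exactly 1, scale the direction vector (rayDirX, rayDirY) by 1/rayDirX, giving the step (1, rayDirY/rayDirX). By the Pythagorean theorem its length is sqrt(1^2 + (rayDirY/rayDirX)^2) = sqrt(1 + (rayDirY*rayDirY)/(rayDirX*rayDirX)), which is exactly deltaDistX; swapping the roles of x and y gives deltaDistY. The division is just the slope of the ray inside the square root. A quick numeric check in C#:

        // deltaDistX should equal |ray length| / |rayDirX| for any direction.
        double rayDirX = 0.6, rayDirY = 0.8;   // arbitrary direction
        double deltaDistX = Math.Sqrt(1 + (rayDirY * rayDirY) / (rayDirX * rayDirX));
        double viaLength = Math.Sqrt(rayDirX * rayDirX + rayDirY * rayDirY) / Math.Abs(rayDirX);
        Console.WriteLine("{0} == {1}", deltaDistX, viaLength);   // both 1.666...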

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on ReSharper 5.0 and its LINQ refactoring and thought that was very cool. But it raised a point I had always been curious about: which is the better choice, manual foreach loops or LINQ?

    The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which performs better.

    Unfortunately, this is not always a good measure. Well-written assembly language outperforms well-written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher-level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking about two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions from the System.Linq library.

    Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the foreach loop and the LINQ expression. This is just a simple find comparison:

        // find implemented via LINQ
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        // find implemented via standard iteration
        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target)
                {
                    return true;
                }
            }

            return false;
        }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        Any         10       26          0.00046             30.00%
        Iteration   10       20          0.00023             -
        Any         100      116         0.00201             18.37%
        Iteration   100      98          0.00118             -
        Any         1000     1058        0.01853             16.78%
        Iteration   1000     906         0.01155             -
        Any         10,000   10,383      0.18189             17.41%
        Iteration   10,000   8843        0.11362             -
        Any         100,000  104,004     1.8297              18.27%
        Iteration   100,000  87,941      1.13163             -

    The LINQ expression runs about 17% slower for average-size collections, and worse for smaller collections. Presumably this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones. I decided to try the TakeWhile() algorithm, using a Count() to get the position stopped, like the sample Pete was using in his blog that ReSharper refactored for him into LINQ:

        // LINQ form
        public static int GetTargetPosition1(IEnumerable<int> list, int target)
        {
            return list.TakeWhile(item => item != target).Count();
        }

        // traditionally iterative form
        public static int GetTargetPosition2(IEnumerable<int> list, int target)
        {
            int count = 0;

            foreach (var i in list)
            {
                if (i == target)
                {
                    break;
                }

                ++count;
            }

            return count;
        }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile   10       41          0.00041             128%
        Iteration   10       18          0.00018             -
        TakeWhile   100      171         0.00171             88%
        Iteration   100      91          0.00091             -
        TakeWhile   1000     1604        0.01604             94%
        Iteration   1000     825         0.00825             -
        TakeWhile   10,000   15765       0.15765             92%
        Iteration   10,000   8204        0.08204             -
        TakeWhile   100,000  156950      1.5695              92%
        Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought: what if TakeWhile() is not the right tool for the job? The problem is that TakeWhile() returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate.

    So what if that back and forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way, PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it; we just happen to be counting on the way. This mostly matches Any().

        // a new method that uses LINQ but evaluates the count in a closure.
        public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
        {
            int count = 0;
            list.Any(item =>
                {
                    if (item == target)
                    {
                        return true;
                    }

                    ++count;
                    return false;
                });
            return count;
        }

    Now how does this one compare?

        List<T> On 100,000 Iterations
        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() just iterates forward until it finds the value we're looking for - a better fit for the task we're attempting.

    So the lesson there is: make sure when you use a LINQ expression that you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even the Any() with the count in the closure is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was the difference between roughly 1.08 ms and 0.82 ms in a List<T>. That's really not bad at all in the grand scheme of things. Even running 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft to have coded and unit-tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well, then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.

  • Third-Grade Math Class

    - by andyleonard
    An Odd Thing Happened...

    ...when I was in third-grade math class: I was handed a sheet of arithmetic problems to solve. There were maybe 20 problems on the page, and we were given the remainder of the class to complete them. I don't remember how much time remained in the class, but I remember I finished working on the problems before my classmates. That wasn't the odd part. The odd part was that I started working on the first problem, concentrating pretty hard. I worked the sum and moved to the next...(read more)

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first-round review of concepts for existing content for OAGIS Ten - one of the first milestones for OAGIS Ten. In doing this, the concepts of the key objects (the Nouns) have been identified, along with the key contexts for their use. While OAGIS Ten remains a work in process, the work group is showing progress. Going forward, the other councils will provide additional input to these as well as their own concepts and the contexts for each. Additionally, sub-groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

  • How does a website like Mathway work?

    - by Bob
    I recently found a website called Mathway. Basically, it works by allowing you to choose your "level of math" (which it uses to determine what tools it should provide to you) and then letting you input a math problem, which it then solves for you, giving you detailed solutions (you have to try it, it's really cool). I was wondering how it works, on two levels. First, how would they parse the math problem (and all the sometimes-foreign mathematical operators)? How do they get from text to numbers, variables, and operators? Second, how do they generate the explanations? While you have to pay for the detailed solutions (which are explanations of how they solved the problem), I've seen their preview screenshots, and they look very detailed. The explanations are given in full, accurate sentences. How would they generate something like that?
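
    On the first question, the usual starting point is a grammar-driven parser. A toy sketch in C# (my own illustration, nothing to do with Mathway's actual engine): a recursive-descent parser that turns "2*(3+4)" into a number. A real system would build an expression tree here instead of a value, and could then generate step-by-step explanations by recording which rewrite rule fires at each step.

        using System;

        // Grammar: Expr := Term (('+'|'-') Term)*
        //          Term := Factor (('*'|'/') Factor)*
        //          Factor := Number | '(' Expr ')'
        // No whitespace or error handling, to keep the sketch short.
        class TinyParser
        {
            private string s; private int i;

            public double Parse(string text) { s = text; i = 0; return Expr(); }

            private double Expr()
            {
                double v = Term();
                while (i < s.Length && (s[i] == '+' || s[i] == '-'))
                    v = s[i++] == '+' ? v + Term() : v - Term();
                return v;
            }

            private double Term()
            {
                double v = Factor();
                while (i < s.Length && (s[i] == '*' || s[i] == '/'))
                    v = s[i++] == '*' ? v * Factor() : v / Factor();
                return v;
            }

            private double Factor()
            {
                if (s[i] == '(') { i++; double v = Expr(); i++; return v; }  // skip ')'
                int start = i;
                while (i < s.Length && (char.IsDigit(s[i]) || s[i] == '.')) i++;
                return double.Parse(s.Substring(start, i - start));
            }
        }

        // usage: new TinyParser().Parse("2*(3+4)") returns 14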

  • Conferences: Starting the round for 2011 with Mix

    - by Enrique Lima
    There are several conferences lining up for 2011. There are some private conferences I will be participating in, and some others where there is an invitation to submit content for consideration. That is the case with Mix 2011.

    The date: April 12-14, 2011
    The venue: Mandalay Bay, Las Vegas

    Here is the general information: http://live.visitmix.com/
    To submit content: http://live.visitmix.com/opencall

  • Round-up: Embedded Java posts and videos

    - by terrencebarr
    I’ve been collecting links to some interesting blog posts and videos related to embedded Java over the last couple of weeks. Passing  these on here: Freescale blog – The Embedded Beat: “Let’s make it real – Internet of Things” Simon Ritter’s blog: “Mind Reading with Raspberry Pi” NightHacking with Steve Chin and Terrence Barr: “Java in the Internet of Things” NightHacking with Steve Chin and Alderan Robotics: “The NAO Robot” Java Magazine: “Getting Started with Java SE for embedded devices on Raspberry Pi” OTN video interview: “Java at ARM TechCon” OPN Techtalk with MX Entertainment: “Using Java and MX’s GrinXML Framework to build Blu-ray Disc and media applications” Oracle PartnerNetwork Blog: “M2M Architecture: Machine to Machine – The Internet of Things – It’s all about the Data” YouTube Java Channel: “Understanding the JVM and Low Latency Applications” Cheers, – Terrence Filed under: Mobile & Embedded Tagged: blog, iot, Java, Java Embedded, Raspberry Pi, video

  • October 2013 Oracle University Round-Up: New Training & Certifications

    - by Breanne Cooley
    Here are the highlights of what is happening this month at Oracle University.

    New Technology Overview Courses: Cloud, Big Data and Security
    Learn about the latest technology solutions that can transform your business. These three Training On Demand courses are taught by industry experts and help you develop an understanding of how Oracle technologies can make a positive impact on your organization.
      Oracle Cloud Overview
      Oracle Big Data Overview
      Oracle Security Overview

    New Cloud Application Foundation Courses
    Check out our brand new 12c courses for WebLogic Server administrators and Coherence developers:
      Oracle WebLogic Server 12c: Administration I
      Oracle WebLogic Server 12c: Administration II
      Oracle Coherence 12c: New Features

    Oracle Database 12c Courses
    Our Oracle Database 12c training is becoming very popular. Here are this month's featured courses:
      Oracle Database 12c: New Features for Administrators
      Oracle Database 12c: Administration Workshop
      Oracle Database 12c: Install and Upgrade Workshop
      Oracle Database 12c: Admin, Install and Upgrade Accelerated
    Validate your expertise and add value by earning an Oracle Database 12c Certification.

    New Certifications for MySQL
    Watch our two new videos to find out what's new with Oracle MySQL Certifications.
      1) Oracle MySQL 5.6 Certification: What's New for Database Administrators
         Recommended training: MySQL for Beginners; MySQL for Database Administrators
      2) Oracle MySQL 5.6 Certification: What's New for Developers
         Recommended training: MySQL for Beginners; MySQL for Developers

    New Training & Certification for Oracle Applications
    JD Edwards 9.1 Training: additional JD Edwards EnterpriseOne 9.1 training is now available for administrators, developers and implementation team members.
      Cross Application Training: JD Edwards EnterpriseOne Common Foundation Rel 9.x
      Human Capital Management Training: JD Edwards EnterpriseOne Payroll for Canada Rel 9.x; JD Edwards EnterpriseOne Payroll for US Rel 9.x; JD Edwards EnterpriseOne Payroll Accelerated for Canada Rel 9.x; JD Edwards EnterpriseOne Payroll Accelerated for US Rel 9.x
      Financial Management Training: JD Edwards EnterpriseOne Accounts Receivable Rel 9.x; JD Edwards EnterpriseOne Financial Report Writing Rel 9.x

    Knowledge Management 8.5 Training
    Oracle Knowledge 8.5 training is now available for analysts interested in learning how to quickly spot trends in content processing and system usage with analytics dashboards.
      Knowledge Analytics Rel 8.5

    Taleo Training
    Updated Taleo training is now available. Taleo Business Edition (TEE) business users can learn how to create more efficient reports; recruiters will learn how to efficiently and effectively use Taleo Business Edition (TBE) Recruit.
      Taleo (TEE): Advanced Reporting
      Taleo (TBE): Recruit - End User Fundamentals

    New Training for Oracle Retail 13.4.1
    Updated training for Retail Predictive Application Server and Retail Demand Forecasting is now available.
      RPAS Administration and Configuration Fundamentals
      RPAS Technical Essentials: Fusion Client 13.4.1
      Retail Demand Forecasting (RDF) Business Essentials 13.4.1

    View all available training courses, learning paths and certifications at education.oracle.com, or contact your local education representative to learn more about Oracle University's education solutions. See you in class!

    -Oracle University Marketing Team

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and the averages of those metrics) within a CPU task schedule.

    Here's the basic table of arrival times and bursts: (image)

    Here's my actual worked version after ticking off the finished processes: (image)

    The time quanta for the two queues are TQ1 = 4 and TQ2 = 3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here. I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
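
    For contrast, a generic sketch of what strict-priority MQS with round robin inside each queue looks like (my own illustration with made-up bursts, since the post's table is an image; a real schedule would also account for arrival times and preemption of Q2 by Q1 arrivals):

        using System;
        using System.Collections.Generic;

        class MqsDemo
        {
            static void Main()
            {
                // (name, remaining burst) - hypothetical data, all arriving at t = 0
                var q1 = new Queue<(string Name, int Left)>(new[] { ("P1", 6), ("P2", 3) });
                var q2 = new Queue<(string Name, int Left)>(new[] { ("P3", 7), ("P4", 2) });
                int t = 0;

                // Strict priority: drain Q1 (TQ = 4) completely before Q2 (TQ = 3).
                Run(q1, 4, ref t);
                Run(q2, 3, ref t);
            }

            static void Run(Queue<(string Name, int Left)> q, int quantum, ref int t)
            {
                while (q.Count > 0)
                {
                    var (name, left) = q.Dequeue();
                    int slice = Math.Min(quantum, left);
                    Console.WriteLine("{0,3}..{1,3}  {2}", t, t + slice, name); // Gantt segment
                    t += slice;
                    if (left > slice) q.Enqueue((name, left - slice));          // back of the queue
                }
            }
        }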

  • Rotate a vector by given degrees (errors when value over 90)

    - by Ivan
    I created a function to rotate a vector by a given number of degrees. It seems to work fine when given values in the range -90 to +90. Beyond this, the amount of rotation decreases; i.e., I think objects are rotating the same amount for 80 and 100 degrees. I think this diagram might be a clue to my problem, but I don't quite understand what it's showing. Must I use a different trig function depending on the radians value? The programming examples I've been able to find look similar to mine (not varying the trig functions).

        Vector2D.prototype.rotate = function(angleDegrees) {
            var radians = angleDegrees * (Math.PI / 180);
            var ca = Math.cos(radians);
            var sa = Math.sin(radians);
            var rx = this.x * ca - this.y * sa;
            var ry = this.x * sa + this.y * ca;
            this.x = rx;
            this.y = ry;
        };
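
    For what it's worth, the rotation formula itself has no 90-degree limitation; cos and sin are valid for any angle. A quick numeric sanity check of the same math (my sketch, in C#):

        // Rotating (1, 0) by 100 degrees should give (cos 100°, sin 100°),
        // i.e. approximately (-0.1736, 0.9848) - well past the 90° mark.
        double r = 100.0 * Math.PI / 180.0;
        double x = 1.0, y = 0.0;
        double rx = x * Math.Cos(r) - y * Math.Sin(r);
        double ry = x * Math.Sin(r) + y * Math.Cos(r);
        Console.WriteLine("({0:F4}, {1:F4})", rx, ry);   // (-0.1736, 0.9848)

    If the math checks out in isolation like this, the apparent clamping likely comes from whatever consumes the rotated vector afterwards (for example, recovering the angle later with atan instead of atan2).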

  • B2B Commerce Best Practice Round Table

    - by Jeri Kelley
    Are you struggling with delivering customers a consistent B2B multi-channel commerce experience? If yes, then you will want to join us for a panel discussion featuring Oracle customers and B2B commerce experts on Thursday, September 27th to learn how leading B2B companies are succeeding in the new age of commerce. Topics of discussion will include: Moving B2B data and content online Multiple site management Mobile platforms Merchandising and personalization Don’t miss this opportunity to learn more about the latest trends, challenges and successes in B2B multi-channel commerce. Learn more and register!

  • T-SQL Tuesday #006 Round-up!

    - by Mike C
    T-SQL Tuesday this month was all about LOB (large object) data. Thanks to all the great bloggers out there who participated! The participants this month posted some very impressive articles with information running the gamut from Reporting Services to SQL Server spatial data types to BLOB-handling in SSIS. One thing I noticed immediately was a trend toward articles about spatial data (SQL Server 2008 Geography and Geometry data types, a very fun topic to explore if you haven’t played around with...(read more)

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes making a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .NET 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example:

        SortedList<int, SortedList<int, SortedList<int, MyClass>>>

    The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time, which allows me time to look at new technology and learn. In terms of using things like TDD, my development manager has no problem with it and can kind of see the purpose of it; he just won't use it himself. This means I am alone in learning new things and often resort to StackOverflow to make sure I get things right.

    The company has a lot of flexibility: I can work from home if need be, and when my daughter was born they let me work from home one day a week. However, they expect this flexibility in return, often asking me to travel occasionally on a Friday afternoon for the following week, sometimes abroad. We are also pretty much on call 24/5, as we have engineers in various countries. And we have no testers, so most of the testing is done by us developers, with some testing by engineers. Either way, no one likes testing!

    I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices, but that appears to have relaxed now, although I suspect it is still reasonably formal. There is a new team of developers I don't know, and they are about to move to new offices. The team lead is a guy that was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation. I think some of the Victorian practices may have rubbed off on him. However, he did say that if things crop up, then as long as he can trust the person, they can work at home, although he prefers people in the office.

    The team uses SCRUM, TDD and SOLID design principles, so they are quite up to date in technology, and reasonably Microsoft-focused. It appears the Technical Director might be the R&D man, researching new technology on his own without letting developers play with it. He is possibly a super-developer who makes all the decisions that no one can argue with. They are currently moving from NHibernate to Entity Framework, based on issues where their queries seem to fail sometimes and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused, and they are an expanding company making a profit each year.

    I'm not sure what the team morale is and whether they have a laugh; when I had a tour around the office they sat in dead silence. I'm really unsure which role is the best for me, and going with my gut instinct is useless as I'm not sure what my gut is telling me. Based on the information above, which role would you choose, and why?

  • Week in Geek: Google Announces New Round of Services to be Shut Down

    - by Asian Angel
    Our latest edition of WIG is filled with news link coverage on topics such as an IE flaw that allows attackers and advertisers to track cursor movement, Microsoft retiring its Live Mesh PC-sync service in February, Yahoo revamping its e-mail service and continuing its overhaul of Flickr, and more.

  • Blogging Round the World

    It seems that once or twice a week, I run across an Android-developer-oriented site that I hadn’t previously noticed. There are already a few aggregators and directories, and...
