Search Results

Search found 9675 results on 387 pages for 'i still play and love oblivion'.


  • Best Platform/Engine for turn based Client/Server Android game

    - by Paradine
    I'm currently designing a turn based game for tablets, initially for Android, with porting to iOS later considered in the design. I'm having trouble narrowing down the available technologies to even know where to spend my research time. I am hoping that if I explain what I am trying to achieve someone may be able to suggest a platform and/or engine. I've looked into some of the open source engines ( http://www.cuteandroid.com/ten-open-source-android-2d-or-3d-game-engine-for-android-developers ) and some appear to handle much of what I might require - although with a higher focus on graphics than I need. Mages looks interesting although development appears to have ceased. If I could somehow leverage GoogleApps that would be excellent. Here is what I am trying to achieve:
    - PvP turn based strategy game over internet - minimal animation and bandwidth required
    - Players match up online using MetaGame system
    - MatchID created on Resolution Server and Game starts
    - Clients have 30 second countdown to select MoveString
    - Clients send small secure timestamped and MatchIDed MoveString to Resolution server
    - Resolution server looks up MoveString for each player, Resolves and Updates Players status in MatchID on Server
    - Resolution server updates Client Views
    - Repeat until victory conditions met - MatchID Closed, Rewards earned in MetaGame
    There will also need to be a full social and account system and metagame backend - but this could be running on separate system(s). Tablet in Offline mode would be catalog browsing and perhaps single player AI - but I'm focusing on the Resolution Server at this point. I'm not even certain if I would be looking at an Android App or a WebApp at this stage! I want a custom GUI so I guess an app - but maybe as I have little animation a WebApp might also work. Probably some combination of both. There will be very small overhead in data between client and server - essentially a small text string every 30 seconds sent to the Resolution server, which looks up the Effect and applies it to the Opponent's string and determines some results to apply to the match. The client view is updated minimally with the results (only 5 in game Integers tracked) - perhaps triggering small animations/popups on the client to show the end result, e.g. Explosion. If you have suggestions for a good technology or platform for building the Resolution Server, I'd love to hear them. Also if you have experience with open source engines - and could narrow down which (if any) might be most suitable - that would be a big help. Thanks in advance
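    As a hedged illustration only - not tied to any particular engine or platform - the round resolution described above could be sketched server-side roughly like this, in C#. All names here (Match, SubmitMove, ResolveRound, the five tracked integers) are hypothetical placeholders taken from the description, not a real API:

        using System.Collections.Generic;

        // Hypothetical sketch of one match on the Resolution Server: clients
        // submit a MoveString each round; when the 30-second countdown expires
        // the server resolves the moves and returns the few integers the
        // clients need to update their views.
        public class Match
        {
            private readonly object _sync = new object();
            private readonly Dictionary<string, string> _pendingMoves =
                new Dictionary<string, string>();

            public string MatchId { get; private set; }

            public Match(string matchId) { MatchId = matchId; }

            // Called when a client's timestamped, MatchID-tagged MoveString arrives.
            public void SubmitMove(string playerId, string moveString)
            {
                lock (_sync) { _pendingMoves[playerId] = moveString; }
            }

            // Called when the 30-second countdown expires.
            public Dictionary<string, int[]> ResolveRound()
            {
                lock (_sync)
                {
                    var results = new Dictionary<string, int[]>();
                    foreach (var move in _pendingMoves)
                    {
                        // Placeholder: a real game would decode the MoveString,
                        // apply its Effect to the opponent, and fill in the
                        // five tracked integers here.
                        results[move.Key] = new int[5];
                    }
                    _pendingMoves.Clear();
                    return results;
                }
            }
        }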

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The game play consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:
    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess)
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces)
    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid, and draws it by converting logical sizes into pixel sizes (that is, multiplies it by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:
    - Player starts to drag a piece
    - UI asks game logic for the legal movement area of that piece and lets the player drag it within that area
    - Player lets go of a piece
    - UI snaps the piece to the grid (so that it is at a valid logical position)
    - UI tells game logic the new logical position (via mutator methods, which I'd rather avoid)
    I'm not quite happy with that: I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI. I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:
    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    - Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once; that is, the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach or asking for the legal movement area first?
    [1] layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misleading, because each layer can consist of several classes.
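    As a hedged sketch of the "logic layer owns collisions, UI owns pixels" split described above (all names hypothetical, in C# purely for illustration), the legal-movement-area query could look something like this:

        // Hypothetical sketch: the logic layer works purely in logical grid
        // units; the UI multiplies by a constant cell size to get pixels.
        public class GameGrid
        {
            private readonly bool[,] _occupied; // true where a block already sits

            public GameGrid(int width, int height)
            {
                _occupied = new bool[width, height];
            }

            // How many cells a piece of the given width can slide right on its
            // row before hitting another block or the boundary - the "legal
            // movement area" the UI asks for when a drag starts.
            public int MaxSlideRight(int x, int y, int pieceWidth)
            {
                int steps = 0;
                while (x + pieceWidth + steps < _occupied.GetLength(0) &&
                       !_occupied[x + pieceWidth + steps, y])
                {
                    steps++;
                }
                return steps;
            }
        }

    With a query like this in the logic layer, the tricky clamping code moves out of the UI, and the UI's drag handler only has to stay within the returned range and snap to whole cells on release.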

    Read the article

  • Scrum and Team Consolidation

    - by John K. Hines
    I'm still working my way through one of the more painful team consolidations of my career. One thing that's made it hard was my assumption that the use of Agile methods and Scrum would make everything easy. Take three teams, make all work visible, track it, and presto: an efficient, functioning software development team. What I've come to realize is that the primary benefit of Scrum is that Scrum brings teams closer to their customers. Frequent meetings, short iterations, and phased deployments are all meant to keep the customer in the loop. It's true that as teams become proficient with Scrum they tend to become more efficient. But I don't think it's true that Scrum automatically helps people work together. Instead, Scrum can point out when teams aren't good at working together. And it really illustrates when teams, especially teams in sustaining mode, are reacting to their customers instead of innovating with them. At the moment we've inherited a huge backlog of tools, processes, and personalities. It's up to us to sort them all out. Unfortunately, after 7½ months we're still sorting. What I'd recommend for any blended team is to look at your current product lifecycles and work on a single lifecycle for all work. If you can't objectively come up with one process, that's a good indication that the new team might not be a good fit for being a single unit (which happens all the time in bigger companies). Go ahead & self-organize into sub-teams. Then repeat the process. If you can come up with a single process, tackle each piece and standardize all of them. Do this as soon as possible, as it can be uncomfortable. Standardize your requirements gathering and tracking, your exploration and technical analysis, your project planning, development standards, validation and sustaining processes. Standardize all of it. Make this your top priority, get it out of the way, and get back to work. Lastly, managers of blended teams should realize what I'm suggesting is a disruptive process. But you've just reorganized; the team is already disrupted. Don't pull the bandage off slowly and force the team through a prolonged transition phase, lowering their productivity over the long term. You can role-model leadership to your team and drive a true consolidation. Destroy roadblocks, reassure those on your team who are afraid of change, and push forward to create something efficient and beautiful. Then use Scrum to reengage your customers in a way that they'll love. Technorati tags: Scrum, Scrum Process

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory. My current setup, somewhat simplified:
    - Web Server A (.NET App)
    - Web Server B (.NET App)
    - Database Server (SQL Server)
    My current deployment process:
    - "Stop" the sites on both Web Server A and B
    - Upgrade the database schema for the version of the app being deployed
    - Update Web Server A
    - Update Web Server B
    - Bring everything back online
    Current Problem: This leads to a small amount of downtime each month - about 30 mins. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also - there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts. Leveraging The Load Balancer: I'd love to be able to upgrade one Web Server at a time. Take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck". Possible Solution: A current solution I am considering is adopting the following rules:
    - Never delete a database table.
    - Never delete a database column.
    - Never rename a database column.
    - Never reorder a column.
    - Every stored procedure must be versioned. Meaning - 'spFindAllThings' will become 'spFindAllThings_2' when it is edited. Then it becomes 'spFindAllThings_3' when edited again. The same rule applies to views.
    While this seems a bit extreme - I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures - and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app is deployed for awhile, but it just feels dirty. Also - it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake. Finally - My Question: Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
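    As a hedged sketch of the versioned-procedure 'contract' idea above (class and procedure names are illustrative only), each application release could pin the procedure versions it expects, so the old and new app versions can run side by side against the same database during a rolling upgrade:

        using System.Data;
        using System.Data.SqlClient;

        // Hypothetical sketch: this release pins version 2 of the procedure;
        // the next release will pin spFindAllThings_3 while _2 stays in place
        // for whichever web server hasn't been upgraded yet.
        static class DbContract
        {
            public const string FindAllThings = "spFindAllThings_2";
        }

        class ThingRepository
        {
            private readonly string _connectionString;

            public ThingRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public DataTable FindAllThings()
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(DbContract.FindAllThings, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var table = new DataTable();
                    new SqlDataAdapter(cmd).Fill(table); // Fill opens/closes the connection
                    return table;
                }
            }
        }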

    Read the article

  • JavaScript Sucks.

    - by Matt Watson
    JavaScript sucks. Yes, I said it. Microsoft's announcement of TypeScript got me thinking today. Is this a step in the right direction? It sounds like it fixes a lot of problems with JavaScript development. But is it really just duct tape and super glue for a programming model that needs to be replaced? I have had a love-hate relationship with JavaScript, like most developers who would prefer avoiding client side code. I started doing web development over 10 years ago and I have done some pretty cool stuff with JavaScript. It has come a long way and is the universal standard these days for client side scripting in the web browser. Over the years the browsers have become much faster at processing JavaScript. Now people are even trying to use it on the server side via node.js. OK, so why do I think JavaScript sucks? Well first off, as an enterprise web application developer, I don't like any scripting or dynamic languages. I like code that compiles, for lots of obvious reasons. It is messy to code with and lacks all kinds of modern programming features. We spend a lot of time trying to hack it to do things it was never really designed for. Ever try to use different jQuery based plugins that require conflicting jQuery versions? Yeah, that sucks. How about trying to figure out how to make 20 JavaScript include files load quicker as one request? Yeah, that sucks too. Performance? Let me just point to the old Facebook mobile app made with JS & HTML5. It sucked. Enough said. How about unit testing JavaScript? I've never tried it, but it sure sounds like fun. My biggest problem with JavaScript is code security. If I make some awesome product, there is no way to protect my code. How can we expect game makers to write apps in 100% JavaScript and HTML5 if they can't protect their intellectual property? There are compiling tools like Closure, unit test frameworks, minify, CoffeeScript, TypeScript and a bunch of other tools. But to me, they all try to make up for the weaknesses and problems with JavaScript. JavaScript is a mess and we spend a lot of time trying to work around all of its problems. It is possible to program in Silverlight, Java or Flash and run that in the browser instead of JavaScript, but they all have their own problems and lack universal mobile support. I believe Microsoft's new TypeScript is a step forward for JavaScript, but I think we need to start planning to go a whole different direction. We need a new universal client side programming model, because JavaScript sucks.

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about "Big Data" - a very catchy term that describes the untamed increase of the data that mankind is producing each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving toward smartphone-generated traffic, means that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it near to impossible to process it in relational databases within reasonable time. This is where 'cloud' technologies come into play. I just read a good article about the growing pains of Hadoop, which became one of the leading players on the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop's adoption in the enterprise world. While this is true, something else drew my attention. According to the article there are already about half a dozen commercially supported distributions of Hadoop. For me, who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is beneficial in the end to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third fact I can't help but notice about the proliferation of distributions of Hadoop is that IT professionals will have jobs. BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors
    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:
    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums
    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;
            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;
                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }
            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks, two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();

                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);

                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact since there are no prettifying options to set.
    Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Return
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to XmlConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);

                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);

                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON Serialization for 'blob' like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. But for some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#
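    For reference, a hedged usage sketch of the wrapper described above - MySettings is a hypothetical POCO; the method signatures are the ones shown in the listings:

        // Hypothetical settings class - any plain object works.
        public class MySettings
        {
            public string Theme { get; set; }
            public int MaxConnections { get; set; }
        }

        public static class Example
        {
            public static void Run()
            {
                // Serialize to a pretty-printed JSON file, then load it back.
                var settings = new MySettings { Theme = "Dark", MaxConnections = 5 };
                bool ok = JsonSerializationUtils.SerializeToFile(
                              settings, @"c:\temp\settings.json", false, true);

                var loaded = JsonSerializationUtils.DeserializeFromFile(
                                 @"c:\temp\settings.json", typeof(MySettings)) as MySettings;
            }
        }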

    Read the article

  • Choose the Text Editor Used to View Source Code in Internet Explorer

    - by Asian Angel
    Everyone has a favorite text editor that they like to use when viewing or working with source code. If you are unhappy with the default choice in Internet Explorer 8 then join us as we show you how to set up access to your favorite text editor.

    A Look at Before

    Here is Internet Explorer on our test system ready to help us view the source code for one of the pages here at the site. Perhaps “Notepad” is your default source code viewer… Or in the case of our test system where “EditPad Lite” was the default due to choices we made while installing it.

    Choose Your Favorite Text Editor

    Chances are you have your own personal favorite and want to make it the default source code viewer. To get started go to the “Tools Menu” and click on “Developer Tools” or press “F12” to access the “Developer Tools Window”. Once you have the “Developer Tools Window” open go to the “File Menu”, then “Customize Internet Explorer View Source”, and click on “Other”. Once you have clicked on “Other” you will see the “Program Directory” for the current default app. Here you can see the “Program Files Folder” for “EditPad Lite”. To change the default app simply browse for the appropriate program folder. On our test system we decided to change the default to “Editra”. Once you have located the program that you want to use click on the “.exe” file for that app and click “Open”. Once you have clicked “Open”, all that is left for you to do is close the “Developer Tools Window”…everything else is already taken care of. And just like that you can be viewing source code with your favorite text editor.

    Conclusion

    If you have been unhappy with the default source code viewer in Internet Explorer 8 then you can set up access to your favorite text editor in just a couple of minutes. Nice, quick, and easy the way it ought to be. Thanks to HTG & TinyHacker reader Dwight for the tip!

    Read the article

  • Freshen the RTS genre

    - by William Michael Thorley
    This isn't really a question, but a request for feedback. RPS (Rock, Paper, Scissors) RTS (Real Time Strategy) Demo version is out. The game is simple. It is an RTS. Why has it been made? Many if not most RTS's are about economy and large numbers of unit types. The genre hasn't actually developed the gameplay drastically from the very first RTS's produced; some lessons have been learned, but the games are really very similar to how they have always been. RPS brings new gameplay to the RTS genre through three means:
    • New combat mechanics: RPS has two unique modes (as well as the old favourite) of resolving weapon fire. These change how combat happens, and make application of the correct units vital to success. From this comes the requirement to run Intel on your enemies.
    • Fixed Resource Economy: Each player has a fixed amount of energy. This means that there is a definite end to the game. You can attrition your enemy and try to outlast them, or try to outspend your opponent and destroy them. There is a limit to how fast ships can be built, through the generation of construction blocks, but energy is the hard limit on economy.
    • Game Modes: Game modes add victory conditions and new game pieces. The game is overseen by a controller which literally runs the game. Games are no longer line them up, gun them down. This means that new tactics must be played, making skirmish games fresh with novel tactics without adding huge amounts of new game units to learn.
    I've produced RPS from the ground up. I will be running a Kickstarter in the near future, but right now I want feedback and input from the game developing community regarding the concepts, where RPS is going, the game modes, the combat mechanics, and how it plays. RPS will give fresh gameplay to the genre so it must be right. It works over the internet or a LAN and supports single player games. Get it. Play it. Tell me about your games. Thank you.
    Demo: https://dl.dropbox.com/u/51850113/RPS%20Playtest.zip
    Tutorials: https://dl.dropbox.com/u/51850113/RPSGamePlay.zip

    Read the article

  • TFS Rant *WARNING* negative opinions are being expressed.

    - by ryanabr
    It has happened several times now where I end up installing TFS "over the shoulder" of the system admin guy whose job it will be to "own" the server when I am gone. TFS is challenging enough to stand up when doing it myself on a completely open platform, but at these locations, networks are locked down, machines are locked down, and the unexpected always seems to pop up. I personally have the tolerance for these things as a software developer, but as we are installing I have to listen to all the 'colorful' remarks being made: "why is it like this" or "this is a piece of crap". Generally the issues center around SharePoint integration. TFS on its own is straightforward, but the last flavor in everyone's mouth is the SharePoint piece. As a product I like SharePoint, but installation is a nightmare. In this particular case, we are going to use WSS since the customer would like this separate from their corp SharePoint 2010 installations, since their dev team is really small (1 developer) and it is being used as a VSS replacement more than a full blown ALM tool. The server where it is being installed has a Cisco Security Agent on it that seems to block 'suspicious' activity, and as far as I can tell is preventing WSS from installing properly. The most confounding thing is that we can find no meaningful log entries to help diagnose the issue. It didn't help matters that when we tried to contact Microsoft for support, because we mentioned TFS in the list of things that we were trying to install, after waiting 2 hours we got a TFS support person NOT the SharePoint person that we really needed, and after another 2 hours the SharePoint support that we did get managed to corrupt the registry sufficiently with his 'tools' that we ended up starting over from scratch the next day anyway after going home at midnight. My point to this is: the System Administrator who is going to own this now thinks it is a piece of crap because SharePoint wouldn't install properly. Perception is everything. Everyone today is conditioned that software installs and works in a very simple manner. When looking at the different options to install TFS with the different "modes", there is inconsistency in the information being presented, which leads to choices that cause headaches and this bad perception before the product is even installed. I am highlighting this because I love TFS as a product, but I HATE installing it, and would like it to install as simply and elegantly as the product operates once it is installed.

    Read the article

  • XNA extending an existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() { }

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the Library and the Pipeline extension are added to the Game Solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message: Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack. AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

    EDIT: Selecting the correct content processor helped; it loads the audio file correctly. However, when I try to process some data, e.g.:

        public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
        {
            int count = input.Data.Count; // With this commented out it works fine
            return new GameTrackContent()
            {
                SongContent = new SongProcessor().Process(input, context)
            };
        }

    It crashes with the following error: Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'C:\Users\Maarten\Documents\Visual Studio 2010\Projects\AudioGame\DebugPipeline\bin\Debug\DebugPipeline.exe'. Additional Information: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::OpenAudioFile' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature. Information from the logger right before the crash:

        Using "BuildContent" task from assembly "Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553".
        Task "BuildContent"
        Building gametrack.mp3 -> bin\x86\Debug\Content\gametrack.xnb
        Rebuilding because asset is new
        Importing gametrack.mp3 with Microsoft.Xna.Framework.Content.Pipeline.Mp3Importer

    I'm experiencing exactly this: http://forums.create.msdn.com/forums/t/75996.aspx

    Read the article

  • SQLAuthority News – I am Presenting 2 Sessions at TechEd India

    - by pinaldave
    TechED is the event which I am always excited about. It is one of the largest technology events in India. Microsoft Tech Ed India 2011 is the premier technical education and networking event for tech professionals interested in learning, connecting and exploring a broad set of current and soon-to-be released Microsoft technologies, tools, platforms and services. I am going to speak at TechED on two very interesting and advanced subjects.
    Venue: The LaLiT Ashok, Kumara Krupa High Grounds, Bangalore – 560001, Karnataka, India
    Sessions Date: March 25, 2011
    Understanding SQL Server Behavioral Pattern – SQL Server Extended Events (12:00 PM to 01:00 PM): History repeats itself! SQL Server 2008 has introduced a very powerful, yet often overlooked, feature called Extended Events. This advanced session will teach experienced administrators capabilities that were not possible before. From T-SQL error to CPU bottleneck, error login to deadlocks - Extended Events can detect it for you. Understanding the pattern of events can prevent future mistakes.
    SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting (04:15 PM to 05:15 PM): Just like a horoscope, SQL Server Waits and Queues can reveal your past, explain your present and predict your future. SQL Server Performance Tuning uses the Waits and Queues as a proven method to identify the best opportunities to improve performance. A glance at Wait Types can tell where there is a bottleneck. Learn how to identify bottlenecks and potential resolutions in this fast paced, advanced performance tuning session.
    My sessions will be on the third day of the event and I am very sure that everybody will be in the groove to learn new interesting subjects. I will have a few give-aways during and at the end of the sessions. I will not tell you what I will have, but it will be for sure something you will love to have. Please make a point and reserve the above time slots to attend my sessions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • Silverlight Cream for April 18, 2010 -- #840

    - by Dave Campbell
    In this Issue: CrocusGirl, Giorgetti Alessandro (-2-), smartyP, Pete Brown, David Poll, David Anson, and Bill Reiss.
    Shoutouts:
    - Yasser Makram has a post up discussing Human Centered ALM with Telerik TeamPulse and Team Foundation Server. I saw this demo'd at DevConnections and it definitely deserves a look.
    - Shawn Wildermuth posted his materials from DevConnections all on one post: Back from DevConnections with SourceCode
    - Shawn Wildermuth also posted an Updated RIA Services + MVVM Example
    - Laurent Bugnion announced a Small change in MVVM Light Toolkit templates for Blend 4 RC
    - Laurent Bugnion also announced Crowdsourcing MVVM Light Toolkit support
    - The Expression Blend and Design Blog announced Expression Blend 4 Release Candidate Available!
    - Dan Wahlin posted Slides and Code from my Silverlight MVVM Talk at DevConnections
    From SilverlightCream.com:
    - Windows Phone 7 Design Notes – Part#1: Metro Resources - CrocusGirl has blogged about WP7 and the Metro design concept. She has a bunch of resources up and information about Metro and the design methodology. Stay tuned for Part 2.
    - Silverlight, M-V-VM ... and IoC - part 1 - Giorgetti Alessandro has part 1 of a multi-parter up on IoC and MVVM for LOB apps in Silverlight ... a pretty quick intro to MVVM.
    - Silverlight, M-V-VM … and IoC – part 2 - Giorgetti Alessandro also posted part 2 of his series, and this one digs deeper into the code and discusses what goes into the view and the model.
    - Using the Facebook Developer Toolkit With Windows Phone 7 - smartyP has a post addressing using the Facebook Developer Toolkit with WP7... it took some hacking, and he explains it, and provides it for download.
    - Silverlight and WPF Tip: Fitting items in a ListBox - Having trouble fitting items into a ListBox in Silverlight or WPF without getting horizontal scrollbars? Pete Brown has a solution for you in 4 steps.
    - Making printing easier in Silverlight 4 - David Poll has a great detailed post up about printing in SL4, taking it to building a higher-level API that allows printing of collections... all demos and source included.
    - Detailed information about the Silverlight Toolkit's new stacked series support - David Anson details the improvements to Data Visualization in the Toolkit release from last week.
    - Space Rocks game step 9: the asteroid sprite - Bill Reiss has his latest game episode up and this time he's putting asteroid sprites in play. No placement, movement, or collisions yet, but it's a beginning. And, he's updated all his code to Silverlight 4.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
    Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, more efficiency and effectiveness of public administrations, and putting in place pre-conditions for developing e-government.
    The Good
    - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications".
    - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards", which is the term the rest of the world would use.
    - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards), including the very strong action: by 2013, all Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies. Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing!
    Some quotes that show the commitment to interoperability and open specifications:
    - "The emergence of innovative technologies such as "service-oriented architectures" (SOA), or "clouds" of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4)
    - "Interoperability is supported through open specifications" (p.13)
    - 2.4.1. Open Specifications and Interoperability (p.13 has a whole section dedicated to this important topic; open specifications and interoperability are nearly 100% interrelated): "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13)
    - "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13)
    The Bad
    - Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability.
    Verdict
    90/100, a very respectable score.

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm - in my case Levi.com, Banana Republic, and ShopStyle show up. The NY Times recently uncovered a situation where JCPenney, via third-parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales during this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading. My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by their rules ... is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and it makes me feel uncomfortable. Now Google is incorporating more social aspects into their search results. For example, when Google knows it's me (i.e. I'm logged in when using Google) search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's perhaps they would now be higher in the result set. Soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation in Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • Ubuntu 11.10 running in windows 7 (wubi) AND on a separate partition

    - by Pareen
    I am in a very strange situation and need some help: I installed Ubuntu 11.10 through Wubi a while back so that I can use it alongside Windows 7. I was running out of space on my disk when trying to install applications. Without understanding how Wubi worked, I partitioned my C drive (creating a new 90 GB partition) in Windows, booted from the Ubuntu 11.10 install/live disk, and used the "something else" option to create ext4 (setting the mount point to root) and swap space partitions (/sda5 and /sda6). After the install, my computer no longer boots with the previous Wubi menu and is now using the Linux GRUB. The options I have are /sda2, which boots Windows 7; /sda1, which doesn't do anything and reloads the same menu; and the run Linux options. So I now have Ubuntu running on a separate partition, as well as the original Wubi install. I want to delete the separate partition and go back to running Ubuntu on Wubi... if I remove the partition will I need the Windows 7 disk to restore the boot loader? I don't have the Windows 7 disk on me, so what is the best way to clean this up so I get rid of the separate partition?
    UPDATE: Thank you so much for your response. Actually, it would be fantastic if I could migrate my Wubi install into the new partition, because I had downloaded the AOSP on the Wubi install (as well as other files) and would love to preserve them. If I can do that and work on the new partition with the old files then that would be great, and I can worry about wiping out the partition completely later on, i.e. when I have the Windows disk or something. Can you tell me how to do this migration?? So when I select /sda2, it loads up my Windows. If I click on the Linux option, it loads up the newly installed Linux (my files that were on the Wubi install aren't there) fine. If I click on /sda1 (SYSTEM_DRIVE... this is what Wubi was using to boot the menu that let me select Windows 7 or Ubuntu)... it fails and just reloads the original menu. Here is the link to my boot info script: http://pastebin.com/dMrY0NL3

    Read the article

  • Listen to Local FM Radio in Windows 7 Media Center

    - by DigitalGeekery
    If you have a supported tuner card and a connected FM antenna, you can listen to your favorite local over-the-air FM stations in Windows 7 Media Center. Before the FM radio option will be available in Windows Media Center, you’ll need to have a TV or radio tuner card installed and configured. If you have a TV tuner card installed, you may already have a radio tuner as well; many TV tuner cards also have built-in FM tuners. Open Windows Media Center, scroll to “Music” and over to “Radio,” then click on “FM Radio.” The radio will turn on and you’ll see the current station number listed in the white box. Just below are standard “Seek” and “Tune” buttons, as well as “Preset” options. Tuning works just like a typical FM radio: click on the (-) or (+) buttons to “Tune” or “Seek” up and down the dial. If you already know the frequency of the station, enter the numbers using the numeric keypad on the remote control or keyboard. To save the current station as a preset, click on the “Save as Preset” button, type in a custom name for your preset station, and click “Save.” Once you set your presets, they will also be available on the main FM Radio screen. The transport controls at the bottom of the screen also allow you to control Volume, Pause, Play, Skip back, and Skip forward. Fast Forward and Rewind, however, are not supported. This is a nice option if you’d like to listen to your local FM favorites on your computer, especially if those stations aren’t available online. If you don’t have an FM tuner and want to listen to thousands of online radio streams, check out our article on RadioTime in WMC.

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding the Oracle and Accenture strategic initiative to deliver the performance and flexibility of Oracle Engineered Systems to clients. The report, "Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems," by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is "one of the largest in the industry." Under the relationship, Accenture has incorporated Oracle Engineered Systems (including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine) into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware running on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform spanning server, storage, and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing, and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost of migrating any vendor's database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement these systems and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions: over 52,000 Accenture consultants are qualified to implement, upgrade, and outsource the Oracle product suite, and Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture's Technology growth platform, put it, "We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions." The pipeline is there, and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide Platform Readiness Assessments, Platform Implementation, App Rationalization, Database Rationalization, and Managed Services. These are all enablement opportunities you can offer customers under Oracle's partner programs, continuing to build the value of their investments and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV

    - by wim.coekaerts
    One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground. It allows me to study the products in great detail and put them to use in ways that individual product teams don't always intend them to be used for :) but that makes it fun. I have a lot of this set up at home because work is sort of a hobby and I just like to tinker with it. Anyway, a few weeks ago I was looking at my Sun Ray rig at home and how well 3D works: Google Earth and some basic OpenGL tests like glxspheres, combined with VirtualGL. It resulted in some very cool demos recorded with my little camera (sorry for the crappy quality of the video :-) : OVDC (soft client) on my Mac, and a Sun Ray 2FS. Never mind the hiccups during zoom; that's because I was using the scroll wheel on my mouse and I can't scroll uninterrupted :) Anyway, this is quite cool! The setup for this was the following: Sun Ray on the LAN, Sun Ray Server 5 (latest) installed on OL5.5 inside a VM running on Oracle VM 2.2 (hardware virtualization, with a virtual network (vif)), and the VirtualGL rendering happened on another box (wopr5) that runs Linux on a little Atom D520 with an ION2 GPU. So the network path goes from the Sun Ray to the Sun Ray Server to wopr5 and back. Given that this is full-screen 3D, it puts a good amount of load on the network, and it's pretty cool that SRS was just a VM :) So, separately, I had written a little blog entry about using SR-IOV and Oracle VM a while back: link to sriov blog entry. Last night when I came home I wanted to do some more playing around with SR-IOV and live migration. To do this, I wanted to set up a VM with two network interfaces: one on a virtual network (vif), and one that's one of the SR-IOV virtual functions from my network card. Inside the guest they show up as eth0 and eth1, and I then bond them using a standard Linux bonding device (bond0 here) with active-active links. The goal here is that on live migration, we detach the VF (eth1 in the guest in this case), the bond then just hums along on eth0 (the vif), we live migrate the VM, and then on the other server, after the migration completes, we re-attach a VF to the VM there; eth1 pops up again and the bond uses both eth0/eth1 to do its work. So, to set this up, I figured, why not use my Sun Ray Server VM, because the 3D work generates a nice network load and is very latency/timing sensitive. In the end, I ran glxspheres on my Sun Ray Server (VM), displaying on my Sun Ray 2FS, and while that was running, I did my live migrate test of this VM (unplug PCI VF, migrate, reconnect VF) and guess what, it just kept running :) veryyyyyy cool. Now, it was supposed to, but it's always nice to see it actually work, for real. Here's a diagram of it. No gimmicks, just real technology at work! enjoy :)

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1: keynote by Quentin Clark, and we're hearing about "redefining mission critical in the cloud". With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching, and so on, as well as passing responsibility to MS for managing hardware, upgrades, and the rest. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS: SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost; the biggest VM (8 CPU cores, 56 GB RAM, 16 1 TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a "3-copies replica set", so HA comes out of the box and the majority of the administration burden is removed, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL Database Premium edition, the resources available for your workload will depend on how nicely others "play" in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic, to deal with (expected) node failures and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering telemetric techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to "shard" your data so Azure can move it between nodes at will. Finding the right path to the cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now, at their companies, it's the conversation that won't go away. Tony.
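
    The retry logic mentioned above deserves a concrete illustration, since it is the piece most on-premises application code lacks. Here is a minimal, driver-agnostic sketch of retry with exponential backoff, written in JavaScript; runQuery is a hypothetical placeholder for whatever data-access call your application makes, and the attempt counts and delays are illustrative only, not figures from the session:

    // Retry an operation with exponential backoff. Transient throttling and
    // node failover are expected in a shared PaaS environment, so database
    // calls get wrapped in something like this. A real implementation would
    // also inspect the error and retry only transient fault codes.
    async function withRetries(operation, maxAttempts = 5, baseDelayMs = 200) {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await operation(); // e.g. open a connection and run the query
        } catch (err) {
          if (attempt === maxAttempts) {
            throw err; // out of retries: surface the error to the caller
          }
          // Back off 200 ms, 400 ms, 800 ms, ... before reconnecting.
          const delayMs = baseDelayMs * Math.pow(2, attempt - 1);
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }

    // Usage, with runQuery standing in for a real data-access call:
    // const rows = await withRetries(() => runQuery("SELECT ..."));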

    Read the article

  • Free eBook with SQL Server performance tips and nuggets

    - by Claire Brooking
    I’ve often found that the kind of tips that turn out to be helpful are the ones that encourage me to make a small step outside of a routine. No dramatic changes – just a quick suggestion that changes an approach. As a languages student at university, one of the best tips I spotted came from outside the lecture halls and ended up saving me time (and lots of huffing and puffing): using a rainbow of sticky notes to mark well-used pages and letter categories in my dictionary. Simple, but armed with a heavy dictionary that could double up as a step stool, those markers were surprisingly handy. When the Simple-Talk editors told me about a book they were planning that would give a series of tips for developers on how to improve database performance, we all agreed it needed to contain a good range of pointers for big-hitter performance topics. But we wanted to include some of the smaller, time-saving nuggets too. We hope we’ve struck a good balance. The 45 Database Performance Tips eBook covers tips to help you avoid code that saps performance, whether that’s the ‘gotchas’ to watch for when using Object-Relational Mapping (ORM) tools, or what to be aware of for indexes, database design, and T-SQL. The eBook is also available to download with SQL Prompt from Red Gate. We often hear that it’s the productivity-boosting side of SQL Prompt that makes it useful for everyday coding, so when a member of the SQL Prompt team mentioned an idea to make the most of tab history, a new feature in SQL Prompt 6 for SQL Server Management Studio, we were intrigued. Now SQL Prompt can save tabs we have been working on in SSMS, as a way to maintain an active template for queries we often recycle. When we need to reuse the same code again, we search for our saved tab (we can also customize its name to speed up the search) to get started. We hope you find the eBook helpful, and as always on Simple-Talk, we’d love to hear from you too. If you have a performance tip for SQL Server you’d like to share, email Melanie on the Simple-Talk team ([email protected]) and we’ll publish a collection in a follow-up post.

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month, so I'm a bit late posting this review. Why I chose this book: I actually know a few of the authors, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title: to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and document the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but it's one I've referred to again since I read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for open source and other vendors as well - it would make for a great library for a systems architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point: this book is aimed at folks who create solutions within an organization. It's not aimed at administrators, DBAs, developers, or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not taken to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text: the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one chapter that tried to take a more conversational, humorous style; this kind of academic work doesn't lend itself to that. I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • So long Oracle ...

    - by arungupta
    ... and thanks for all the fish! This Friday (October 18, 2013) is my last day at Oracle. After publishing almost 1,400 blog entries with 5,500+ comments on them, working in the Java EE team since its inception, visiting 35+ countries and several cities around the world, speaking at all major Java conferences and lots of Java User Groups, being a 15-year alumnus of JavaOne as staff, meeting and working with the best of the best in the Java community, and, most importantly, having lots of fun, it's time for me to move on! No new blog entries will be posted on this blog. Feel free to subscribe to The Aquarium for the latest updates on Java EE and GlassFish. I'll continue to publish all the excellent content that you've been used to, from now on at blog.arungupta.me. Read my new blog to learn about my new adventures! Here are some of the conference badges collected over the past years ... And the cities visited ... The comments on this blog are disabled, as I'll not be able to respond to them. Feel free to leave comments on the new blog; I'd love to follow up with you there. Thank you very much for all the support that has been shown on this blog. I'd like to conclude with a Hindi song that I've been humming for the past few days now ... Abhi alvida mat kaho doston ... Na jaane kahan phir mulaqaat ho ... Kyonki ... Beete huye lamhon ki kasak saath to hogi ... Khawabon mein hi ho chahe mulaqaat to hogi ... For my non-Hindi readers, here is my paraphrased meaning ... Don't say goodbye yet, my friends ... We'll likely meet somewhere else ... Because ... We'll always have the memories of the wonderful time spent together ... Maybe in dreams, but we will meet again ... With that, over and out, and see you at blog.arungupta.me!

    Read the article

  • JavaScript in different browsers

    - by PointsToShare
    Adventures with JavaScript rendered in IE 8, Chrome 15, and Firefox 8.0. I have written a little monograph about the advantages of math and wrote a few JavaScript applications to demonstrate them. I was a bit careless and used elements on the page in my JavaScript without using any of the getElementById/getElementsBy* methods to identify them. Say I had a text box named tbSeqNum into which I entered a number to be used in a computation. In my code I simply referred to its value directly, like here: function blah() { return tbSeqNum.value; } This ran fine in IE8: in IE, elements are available as global variables. This is not the case in either Firefox or Chrome; there, one has to look the element up explicitly and only then use it. Assuming I also used tbSeqNum as the element's ID, this works: function blah() { return document.getElementById("tbSeqNum").value; } Naturally this corrected function also works in IE, so be warned. Also, coming from Windows programming (I am long in the tooth and programmed long before the internet), I have a habit of putting an "Exit" button on my pages and setting its onclick to onclick="window.close()". Again, this works fine in IE. In Firefox and Chrome, it does not! There you can only close a window that you opened in code; a window that was opened by navigating to a URL will not close. Before I deployed my code to my website, I painfully removed all my Exit buttons. But my greatest surprise came when I tested my pages in the various browsers. In my code I compare the performance of two algorithms used to solve the same problem. One is brute force; the other uses a mathematical formula. The compare function runs each many times and displays the time each took, as well as the ratio. Chrome runs JavaScript between 5 and 10 times faster than Firefox, and between 50 and 100 times faster than IE. Wow!!! This difference is especially remarkable when the code uses iteration. I suspect that the JS engines in Chrome and Firefox simply cache the result of a function, and if it is called again with the same parameters, they return the cached result. To see it in action, run the "How Many Squares" page at www.mgsltns.com/games.htm (the host is running on Unix, so the link is case sensitive). Last note: IE9 runs JS a bit faster, but still lags behind almost as badly. That's All Folks!
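
    To make the brute-force-versus-formula gap concrete, here is a minimal sketch of that kind of benchmark. This is an illustrative example, not the code behind the "How Many Squares" page: it sums the integers 1..n once with a loop and once with the closed-form formula n(n+1)/2, and times both. It sticks to var and plain functions so it runs even in the older browsers discussed above.

    // Brute force: iterate and accumulate. Runtime grows with n.
    function sumBruteForce(n) {
      var total = 0;
      for (var i = 1; i <= n; i++) {
        total += i;
      }
      return total;
    }

    // Closed-form formula: constant time, no iteration.
    function sumFormula(n) {
      return (n * (n + 1)) / 2;
    }

    // Time a function and report its result and elapsed milliseconds.
    function time(label, fn) {
      var start = new Date().getTime();
      var result = fn();
      console.log(label + ": " + result + " (" + (new Date().getTime() - start) + " ms)");
    }

    var n = 100000000; // large enough to expose engine differences
    time("brute force", function () { return sumBruteForce(n); });
    time("formula", function () { return sumFormula(n); });

    Paste it into each browser's console and compare the reported times; the gap between the loop and the formula, and between browsers on the loop, is exactly the kind of ratio the article describes.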

    Read the article

  • The best Bar on the globe is ... in Seoul/Korea

    - by Mike Dietrich
    As you know already, sometimes I write about things which really don't have anything to do with a database upgrade. So if you are looking for tips, tricks, and articles about that topic, please stop reading now. Actually, I'm not a let's-go-to-a-bar person. I enjoy good food and a fine dessert wine afterwards. But last week in Seoul/Korea, Ryan, our local host, did ask us after a wonderful dinner at a Korean barbecue place if we'd like to visit a bar. I was really tired, as I had flown into Seoul overnight from Sunday to Monday, arriving Monday early morning, then a shower, breakfast, and a full day of very good and productive customer meetings. But one thing Ryan mentioned caught my immediate attention: the owner of the bar collects records and has a huge tube-amp stereo system, and you can ask him to play your favorite songs. The bar is called "Peter, Paul and Mary" - honestly not my favorite style of music. And I couldn't even find a webpage or an address, only that little piece of information on Facebook. But after stepping down the stairs to the cellar, my eyes almost popped out of my head. This is the audio system: enormous corner-horn loudspeakers from Western Electric. Pretty old, I'd suppose, but delivering incredibly present dynamics into the room. And plenty of tube equipment from Jadis, NSA Labs, and Shindo Laboratories of Tokyo, including Western Electric 300B Limited amps. And the owner (I was so amazed I simply forgot to ask for his name) has been collecting records for 40 years. And we had many wishes that night. Actually, when we entered Peter, Paul and Mary, he was playing an old Helloween song. That must have been destiny: a German enters a bar in Korea and the owner is playing an old song by one of Germany's best heavy metal bands ever. And it went on with the Doors, Rainbow's Stargazer, Scorpions, later Deep Purple's Perfect Strangers, a bit of Santana, Carly Simon, Jimi Hendrix, David Bowie ... Ronnie James Dio's Holy Diver, Gary Moore, Peter Gabriel's San Jacinto ... and many many more great songs ... Of course we were the last guests, leaving the place at 2am in the morning - and I've never ever had a better night in a bar before ... I could have stayed for days listening to so many records ... Thanks Ryan, that was a fantastic night! -Mike

    Read the article
