Search Results

Search found 30964 results on 1239 pages for 'tutor best practice'.


  • Pack of resources in one big file with XNA

    - by Cristian
    Is it possible to pack all the little .xnb files into one big file? Given the level of abstraction of the XNA Framework I thought this would come out of the box, but I can't find any well-integrated solution. So far the best candidate is XnaZip, but besides having to compile the resources in a post-build event and some trouble porting the game to Xbox, I would have to rename all the references to resources I have already implemented.

    Read the article

  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled  "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much lauded new book,  Scala for the Impatient.  None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe Website, something Horstmann graciously assented to. Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, " you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable. " Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++.  When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice." Learn more about Scala by going to the interview here.

    Read the article

  • Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

    - by Ann Donahue
    AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. Basically, when a user logs into Essbase, AgentThreads handle the connection to the ESSBASE process, AgtSvrConnections then connect the ESSBASE process to the ESSSVR application process, and ServerThreads are used for end-user activities. In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5; in previous Essbase releases, the default value is 1. It is recommended that the AgtSvrConnections setting be tuned incrementally, by 1 or 2 at most, based on the number of concurrent Set Active/Clear Active calls. The Essbase DBA Guide and Technical Reference recommend not exceeding the value set for AgentThreads; however, we have found that most customers do not need to exceed a setting of 10. In general, it is OK to set AgtSvrConnections close to the AgentThreads setting, but there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 can have a negative impact on Essbase because too many TCP ports are used unnecessarily. As with all essbase.cfg settings, it is best to set values to what is needed based on process load rather than arbitrarily setting them high. To monitor and tune the AgtSvrConnections setting, watch the application log for logins and Set Active/Clear Active messages. If many logins and Set Active/Clear Active messages occur in a short period of time, making it appear that login is taking longer, increase the AgtSvrConnections setting incrementally by 1 or 2, which can help with login speed. Login performance tolerance differs from one customer environment to another, since other factors, such as network latency, can affect it. What happens in Essbase when a user logs in: ESSBASE issues a Set Active to the ESSSVR process (each application has its own ESSSVR process); Set Active then calls MultipleAsyncLogout and waits on the pipe connection; MultipleAsyncLogout goes back to ESSBASE; ESSBASE then needs to send the logout back to the ESSSVR process. The AgtSvrConnections setting needs to be increased from the default of 5 when Essbase cannot find a free connection because the existing connections are all in use between ESSBASE and ESSSVR. In this example, we may want to increase AgtSvrConnections from 5 to 7 to improve login performance. Again, it is best to set Essbase settings to what is needed based on process load and not arbitrarily set them to high values. In general, stress or performance testing environments using automated tools may need higher-than-normal settings, because automated processes log in and out at high speed. In a real-life production environment, the settings are typically much closer to the default values.
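    To make the relationship between the three settings concrete, here is a minimal sketch of how they might appear together in essbase.cfg, using the uppercase spellings commonly seen in that file: AGENTTHREADS governs the agent's login-servicing threads, SERVERTHREADS the per-application ESSSVR threads, and AGTSVRCONNECTIONS the ESSBASE-to-ESSSVR connections discussed above. The values are purely illustrative assumptions, not a recommendation; derive your own from the login and Set Active/Clear Active load you observe.
```
AGENTTHREADS 20
SERVERTHREADS 25
AGTSVRCONNECTIONS 7
```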

    Read the article

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not to dump too much frustration here... I've been running 12.04 since alpha with the ATI open source drivers and the GNOME 3 desktop, and I've been generally very happy with them, with only small issues along the way. Of course they don't support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble Bundle would not play. OK, no worries, the ATI driver is in the repos, so let me have a go, I thought. With all this testing that's been done with multi-monitor support, what could go wrong...? How I use my computer: it's a laptop with an HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), in a multi-desktop environment. What's happening now: I installed the drivers and things seemed to be working, save some small other bugs (not critical). This morning I take my machine and plug the additional monitor into it, and nothing happens... ok fine. I open "Displays" and try to configure the dual display; it won't work. I open the ATI config "thing" (cause it is a thing, a crap thing) and set up the monitors there. Reboot, it says (oh ffs, really.... ok). I reboot, log in and wow, I get a GNOME 2 desktop (presumably the GNOME 3 fallback) and no multi-monitor... great. (screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7 ) After getting into a situation where I had to Ctrl + Alt + Del to get out of a frozen display, I eventually managed to set up a single-display desktop on the "main" monitor. OK.. time to go home... unplug the monitor... nothing happens.. oh boy here we go... I try "Displays" again, nothing, it just hangs the display.. great. Crash all the apps and reboot... So it's been a trying day... What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that: allows me to run fglrx (so I can run the games I want); allows me to hot-plug a monitor to my laptop and remove it again; and allows me to change the display setup to include the hot-plugged monitor (preferably automatically, like it did with the open drivers). Next best, if that's not possible: switch between laptop-only display and monitor-only display easily (i.e. not having to reboot/logout/suspend etc). Really appreciate the time of anyone that has a solution. Thanks in advance. Regards Brendan PS: I guess I should file a bug about this too, so some direction as to the best place to file this would be appreciated too.

    Read the article

  • Designing Search Engine Keyword Optimization Friendly Websites and Blogs

    To generate free targeted traffic, you must know how to go about search engine keyword optimization, because getting your website or blog onto the first page of Google or Yahoo is one of the most effective ways to make money online. Google, MSN and Yahoo are used by most people to search for information, so if you can optimize your website pages for common keywords, you will attract targeted traffic.

    Read the article

  • Distributed C++ game server which uses a database

    - by Slav
    Hello. My C++ turn-based game server (which uses a database) can barely handle the current average number of clients (players), so I want to expand it across multiple computers and databases while all clients still remain within a single game world (the servers will have to communicate with each other and use multiple databases). Are there any tutorials, books or common standards that explain the best way to do this?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately. We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported by the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axis changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
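For readers who want to see the shape of the technique without opening the linked solution, here is a minimal sketch of the block-and-parallelize approach described above. It assumes the 1.x StorageClient library the post refers to and an illustrative "uploads" container; it is not the author's actual test harness (that code is linked in the post), and the 1 MB default block size is only the post's fallback recommendation.
```csharp
// Sketch of blocked, parallel upload with PutBlock()/PutBlockList(), as described above.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

static class BlockedUploader
{
    public static void Upload(string connectionString, string filePath, int blockSize = 1024 * 1024)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        container.CreateIfNotExist();
        CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(filePath));

        byte[] data = File.ReadAllBytes(filePath);              // fine for a sketch; stream in production
        int blockCount = (data.Length + blockSize - 1) / blockSize;
        var blockIds = new string[blockCount];

        // Upload the blocks in parallel; each block carries its MD5 so the service can verify it.
        Parallel.For(0, blockCount, i =>
        {
            int offset = i * blockSize;
            int length = Math.Min(blockSize, data.Length - offset);
            string blockId = Convert.ToBase64String(BitConverter.GetBytes(i)); // equal-length block ids
            string md5;
            using (var hasher = MD5.Create())
                md5 = Convert.ToBase64String(hasher.ComputeHash(data, offset, length));
            using (var ms = new MemoryStream(data, offset, length))
                blob.PutBlock(blockId, ms, md5);
            blockIds[i] = blockId;
        });

        // Commit the block list to assemble the final blob.
        blob.PutBlockList(blockIds);
    }
}
```
Choosing the block size per source file (rather than a fixed 1MB) is where the post's "optimal approach" comes from, which is why the block size is left as a parameter here.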

    Read the article

  • Is event sourcing ready for prime time?

    - by Dakotah North
    Event Sourcing was popularized by LMAX as a means to provide speed, performance scalability, transparent persistence and transparent live mirroring. Before being rebranded as Event Sourcing, this type of architectural pattern was known as System Prevalence, yet I was never familiar with it before the LMAX team went public. Has this pattern proved itself in enough production systems that even conservative individuals should feel empowered to embrace it, or is event sourcing / system prevalence an exotic pattern that is best left for the fearless?
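    For readers unfamiliar with the pattern being asked about, here is a deliberately tiny, generic illustration of the event-sourcing idea (it is not LMAX's implementation, and the account/event names are made up): state changes are captured as an append-only log of events, and current state is recovered by replaying them.
```csharp
// Minimal event-sourcing sketch: the event log is the source of truth, state is derived.
using System;
using System.Collections.Generic;

record Deposited(decimal Amount);
record Withdrawn(decimal Amount);

class Account
{
    private readonly List<object> _log = new();   // append-only event log (the source of truth)
    public decimal Balance { get; private set; }  // derived state, never stored directly

    public void Deposit(decimal amount) => Apply(new Deposited(amount), append: true);
    public void Withdraw(decimal amount) => Apply(new Withdrawn(amount), append: true);

    // Rebuild state by replaying a previously persisted log.
    public static Account Replay(IEnumerable<object> history)
    {
        var account = new Account();
        foreach (var e in history) account.Apply(e, append: false);
        return account;
    }

    private void Apply(object e, bool append)
    {
        Balance += e switch
        {
            Deposited d => d.Amount,
            Withdrawn w => -w.Amount,
            _ => throw new InvalidOperationException("Unknown event")
        };
        if (append) _log.Add(e);
    }

    public IReadOnlyList<object> Log => _log;
}
```
    Persisting the Log collection (to a journal file or a database table) and later calling Account.Replay over it is what gives the "transparent persistence" and replay-based recovery the question refers to.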

    Read the article

  • 8 Free WordPress Instagram Plugins

    - by Ravish
    Instagram is a fast, fun and beautiful free photo-sharing app. You can edit a picture or snapshot to your liking, giving it a great look, and share your Instagram photos with friends via any social network, along with all the updates you have made. There are several WordPress plugins that [...] Related posts: 10 WordPress Plugins For Google Adsense, Best & Free 4 Amazon Affiliate Plugins For WordPress, Integrating Flickr with WordPress

    Read the article

  • SQL Rally Presentations

    - by AllenMWhite
    As I drove to Dallas for this year's SQL Rally conference (yes, I like to drive) I got a call asking if I could step in for another presenter who had to cancel at the last minute. Life happens, and it's best to be flexible, and I said sure, I can do that. Which presentation would you like me to do? (I'd submitted a few presentations, so it wasn't a problem.) So yesterday I presented "Gathering Performance Metrics With PowerShell" at 8:45AM, and my newest presentation, "Manage SQL Server 2012 on Windows...(read more)

    Read the article

  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system and I'm having massive trouble adjusting any of those solutions to my needs. What I learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
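    Since the poster's coordinate-system image isn't reproduced in this excerpt, the following is only a generic sketch of the transformation-matrix approach: express one map step along each axis as a screen-space vector, then invert that 2x2 matrix to go from screen coordinates back to map coordinates. The 64x32 tile size and the diamond-style basis vectors are assumptions, and a staggered storage layout would additionally need to convert the resulting axial coordinates into its own row/column indexing.
```csharp
// Generic screen-to-map conversion via an inverted 2x2 basis matrix (assumed layout).
using Microsoft.Xna.Framework;

public static class IsoTransform
{
    // Hypothetical basis for 64x32 tiles: one step along map X moves (+32, +16) on screen,
    // one step along map Y moves (-32, +16). Replace these with your own axes.
    static readonly Vector2 AxisX = new Vector2(32f, 16f);
    static readonly Vector2 AxisY = new Vector2(-32f, 16f);

    public static Vector2 ScreenToMap(Vector2 screen, Vector2 origin)
    {
        Vector2 p = screen - origin;                       // make the point relative to the map origin
        float det = AxisX.X * AxisY.Y - AxisY.X * AxisX.Y; // determinant of the [AxisX AxisY] matrix

        // Apply the inverse matrix to recover map-space coordinates.
        float mapX = ( p.X * AxisY.Y - p.Y * AxisY.X) / det;
        float mapY = (-p.X * AxisX.Y + p.Y * AxisX.X) / det;
        return new Vector2(mapX, mapY);
    }
}
```
    Flooring the returned components gives tile indices; if your map is stored staggered, convert those axial coordinates to your row/column scheme afterwards.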

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."
    'Shadow IT' can be the cloud's best friend | David Linthicum "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says Infoworld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?
    9 ways cloud will impact IT employment | ZDNet ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: New categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.
    Decisions, Decisions: The art, science, and politics of technology selection "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."
    Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.
    Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances.
    Thought for the Day "No one hates software more than software developers." — Jeff Atwood Source: SoftwareQuotes

    Read the article

  • Paid Website Code Review

    - by clifgray
    I have written a pretty extensive webapp and it is going to go live in the next few weeks, and before I really publicize it I want to get some professionals to review it for optimization and best practices. Is there any online service or way to find local software engineers who would be willing to do this? Just to give some specifics that may be helpful: my site is on Google App Engine and written in Python, and it is tough to find someone with extensive experience in that area.

    Read the article

  • How to convince a client that you will not steal his idea

    - by gladysbixly
    Hey all, I came across a thread entitled How To Stop A Developer From Stealing Your Business Idea and I can't help but raise a brow. The issue talks about a developer being able to pass the idea on to another party and benefit from it. As a developer, what is the best way to assure your client that you will not steal his ideas? Are there any practices, laws or anything else that takes care of the interests of both sides? Edit: linked to the thread, and I didn't understand everything that was said

    Read the article

  • Oracle SOA Partner Community Forum Lisbon, Portugal – April 21st 2010

    - by Jürgen Kress
    We would like to invite you to attend our SOA Partner Community Forum that will be held in Lisbon on April 21, 2010. The Oracle SOA Partner Community Forum is a wonderful opportunity to: meet with Oracle SOA and BPM Product Management, exchange thoughts and knowledge with SOA and BPM experts, learn from successful SOA implementations, and network within the Oracle SOA Partner Community. During this highly informative event you can learn about partner success stories, participate in an array of breakout sessions, exchange information with other partners and enjoy a vibrant panel discussion. Places are limited, so register today. Registration only takes a few minutes and it is free of charge. By registering you confirm that you will attend the event. The seminar is free; however, if you cancel your registration after April 16th 2010, Oracle may ask you to pay a late cancellation fee of €150. Please visit our website for further information. Alternatively, if you require assistance or have any queries please contact Jürgen Kress.
    Agenda:
    10:00 Welcome & Introduction
    10:15 SOA Cloud presentation
    11:15 SOA Partner Sales Campaign
    12:30 Lunch break
    13:15 Partner Reference Case
    14:15 BPMN 2.0
    15:00 Cocktail reception
    Location: Lagoas Park Hotel, 2740-245 Porto Salvo, Oeiras. For partners with BPM 11g opportunities we will offer an advanced workshop on Thursday April 22nd and Friday April 23rd hosted by Clemens Utschig-Utschig. If you are interested please contact Jürgen Kress.
    Quotes from previous SOA Partner Community Forums: "The SOA Partner Community Forum was a first-rate event that provided a balanced agenda of vendor-specific and vendor-neutral content pertaining to modern-day service-oriented computing technologies and practices. I enjoyed the opportunity to provide an objective voice on the topics I consider most important for today's IT practitioners to fully leverage the many patterns, principles, and service technology innovations that comprise the next-generation SOA platform." Thomas Erl, SOA Systems Inc., SOASchool.com "The Community is an excellent forum for Partners to hear about each others' success stories on SOA, especially BPEL and ODI." Jørn F. Schurink, Competence Expert Oracle Technologies, Logica "The Community is the best source for information around Oracle SOA, a wonderful platform with many interesting contacts and discussions." Torsten Winterberg, Opitz Consulting "The regular meetings of the SOA Partner Community are a perfectly organized platform for learning the latest in Oracle SOA tooling from extraordinary speakers and for vivid discussions with practitioners about SOA challenges and design solutions. This is the best opportunity to build and deepen a network with the brightest and most passionate protagonists in the Oracle SOA world in EMEA." Hajo Normann, HP Services Technorati Tags: soa partner community forum, soa, event

    Read the article

  • What are some options and methods to link a contact form on WordPress to an existing form processing script?

    - by eirlymeyer
    I'm searching for the best way to link the output of a WordPress contact form plugin on a WordPress website to an existing MySQL database where contact form submissions are processed. Scenario: a new site (Site A) is being developed with a contact form. Site B (the old site) uses a contact form script to process leads through an existing legacy database and a ColdFusion application. The goal is to create Site A with a new contact form that continues the same existing processes. Site A is to become the new Site B.

    Read the article

  • Day 3 - XNA: Hacking around with images

    - by dapostolov
    Yay! Today I'm going to get into some code! My mind has been on this all day! I find it amusing how I practice, daily, to be "in the moment" or "present" and the excitement and anticipation of this project seems to snatch it away from me frequently. WELL!!! (Shakes Excitedly) Let's do this =)! Let's code! For these next few days it is my intention to better understand image rendering using XNA; after said prototypes are complete I should (fingers crossed) be able to dive into my game code using the design document I hammered out the other night. On a personal note, I think the toughest thing right now is finding the time to do this project. Each night, after my little ones go to bed, I can only really afford a couple of hours of work on this project. However, I hope to utilise this time as best as I can because this is the first time in a while I've found a project that I've been passionate about. A friend recently asked me if I intend to go 3D or extend the game design. Yes. For now I'm keeping it simple. Lastly, just as a note, as I was doing some further research into image rendering this morning I came across some other XNA content and lessons learned. I believe this content could probably have been posted in the first couple of posts; however, I will share the new content as I learn it at the end of each day. Maybe I'll take some time later to fix the posts, but for now:
    Installation and Deployment - Lessons Learned
    I had installed XNA Game Studio (Day 1) and the site instructions were pretty easy to follow. However, I had a small difficulty with my development environment. You see, I run a virtual desktop development environment. Even though I was able to code and compile all the tutorials, the game failed to run...because I lacked a 3D capable card; it was not detected on the virtual box... First Lesson: The XNA runtime needs to "see" the 3D card! No sweat, I copied the files over to my parent box and executed the program. ERROR. Hmm... Second Lesson (which I should have probably known but I let the excitement get the better of me): you need the XNA runtime on the client PC to run the game, oh, and don't forget the .Net Runtime!
    Sprite, it ain't just a Soft Drink...
    With these prototypes I intend to understand and perform the following tasks:
    learn game development terminology
    how to place and position (rotate) a static image on the screen
    how to layer static images on the screen
    understand image scaling
    can we reuse images?
    understand how framerate is handled in XNA
    how to display text, basic shapes, and colors on the screen
    how to interact with an image (collision of user input?)
    how to animate an image and understand basic animation techniques
    how to detect colliding images or screen edges
    how to manipulate the image, let's say colors, stretching
    how to focus on a segment of an image...like only displaying a frame on a film reel
    what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.)
    Well, let's start with this "prototype" task list for now... Today, let's get an image on the screen and maybe I can mark a few of the tasks as completed...
    C# Prototype1
    New Visual Studio Project
    Select the XNA Game Studio 3.1 Project Type
    Select the Windows Game 3.1 Template
    Type Prototype1 in the Name textbox provided
    Press OK.
    At this point code has auto-magically been created. Feel free to press the F5 key to run your first XNA program. You should have a blue screen in front of you.
Without getting into the nitty gritty right now, the code that was generated basically clears the window content with the lovely CornflowerBlue color. Something to notice: when you move your mouse into the window...nothing. ooooo spoooky. Let's put an image on that screen!
Step A - Get an Image into the solution
Under "Content" in your Solution Explorer, right click and add a new folder and name it "Sprites". Copy a small image in there; I copied a "Royalty Free" wizard hat from a quick google search and named it wizards_hat.jpg (rightfully so!)
Step B - Add the sprite and position fields
Now, open/edit Game1.cs. Locate the line "SpriteBatch spriteBatch;" and under it type the following:
        SpriteBatch spriteBatch; // the line you are looking for...
        Texture2D sprite;
        Vector2 position;
Step C - Load the image asset
Locate the "LoadContent" method and duplicate the following:
        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // your image name goes here...
            sprite = Content.Load<Texture2D>("Sprites\\wizards_hat");
            position = new Vector2(200, 100);
            base.LoadContent();
        }
Step D - Draw the image
Locate the "Draw" method and duplicate the following:
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            spriteBatch.Draw(sprite, position, Color.White);
            spriteBatch.End();
            base.Draw(gameTime);
        }
Step E - Compile and Run
Engage! (F5) - Debug! Your image should now display on a CornflowerBlue window about 200 pixels from the left and 100 pixels from the top. Awesome! =) Pretty cool how we only coded a few lines to display an image, but believe me, there is plenty going on behind the scenes. However, for now, I'm going to call it a night here. Blogging all this progress certainly takes time... However, tomorrow night I'm going to detail what we just did, plus start checking off points on that list! I'm wondering right now if I should add pictures / code to this post...let me know if you want them =) Best Regards, D.
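As an aside that is not part of the original post, the fuller SpriteBatch.Draw overload shown below is one way to start ticking the "place and position (rotate)" and "image scaling" items on the task list above; it is a variant of the Draw method from Step D, and the 45-degree rotation, centre origin and 1.5x scale are arbitrary illustration values.
```csharp
// Sketch: same Game1.Draw, but using the overload that supports rotation, origin and scale.
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
    spriteBatch.Draw(
        sprite,
        position,                       // where to draw on screen
        null,                           // source rectangle (null = whole texture)
        Color.White,                    // tint
        MathHelper.ToRadians(45f),      // rotation, in radians
        new Vector2(sprite.Width / 2f,  // origin: rotate and scale around the sprite's centre
                    sprite.Height / 2f),
        1.5f,                           // uniform scale
        SpriteEffects.None,
        0f);                            // layer depth
    spriteBatch.End();

    base.Draw(gameTime);
}
```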

    Read the article

  • Stupid simple music player?

    - by itsadok
    Here's what I want: I have a folder with MP3 files. I would like to play all the files in the folder, alphabetically. And I need a way to pause, skip to next file, and that's pretty much it. I don't want to use my music player to organize my music library, I don't need connection to network services, and I don't even need ID3 support. On Windows, WinAmp 2 did what I wanted. What's my best option on Ubuntu?

    Read the article

  • Building general programming skills?

    - by toleero
    Hello :) I am currently quite new to programming. I've had exposure to a few languages (C#, PHP, JavaScript, VB, and some others) and I'm quite new to OOP. I was just wondering: what is the best way to build up general programming/problem-solving skills without being language specific? I was thinking maybe of something like Project Euler but more geared towards newbies? Thanks! Edit: I am looking at getting into Game Scripting/Programming. I'm already in Games but in a different discipline :)

    Read the article

  • MySQL at the DOAG Conference this week in Nuremberg

    - by Bertrand Matthelié
    Planning to attend the DOAG Conference this week in Nuremberg? There will be several MySQL presentations, including the following three from Oracle team members:
    Oracle GoldenGate: Bindeglied zwischen Oracle & MySQL Datenbanken (the link between Oracle & MySQL databases) - Ileana Somesan, Wednesday November 21, 12:00
    NoSQL and SQL: Blending the Best of Both Worlds - Andrew Morgan, Wednesday November 21, 14:00
    MySQL Replikation (MySQL Replication) - Carsten Thalheimer, Wednesday November 21, 16:00
    We look forward to seeing you there!

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise. Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems. Case in point, almost everyone has ordered from Amazon.com at one time or another.
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article
