Search Results

Search found 26764 results on 1071 pages for 'map control'.

Page 739/1071 | < Previous Page | 735 736 737 738 739 740 741 742 743 744 745 746  | Next Page >

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so whether I would qualify for a more prestigious position than "code monkey".
    Things I Have Going For Me:
    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies): I wrote a statistics library in D, mostly from scratch. I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library. I wrote a 2D plotting library for D against the GTK Cairo backend; I currently use it for most of the figures I make for my research. I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.) I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)
    Things I Have Going Against Me:
    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Cannot connect to telnet server

    - by BloodPhilia
    So, I can't use telnet to connect to any server, but it works fine from a different computer. It just says it can't connect. I tried the following things:
    - Disable firewall and AV protection. (Basically, there was no security feature left online.)
    - Telnet is set to "Trusted" in my AV protection. (Kaspersky Internet Security 2011)
    - Using PuTTY to telnet, but apparently PuTTY's connection is also inhibited. (Says it can't connect to host.)
    - Disabling the telnet client in Control Panel and then re-enabling it. (Windows 7 Ultimate)
    - The hosts file is clean.
    - Checked for nasties using MBAM and KIS 2011, as well as going through my HijackThis logs; nothing found.
    I can connect to the same machines/servers through the web browser, ping, tracert, etc. Only telnet seems to be blocked. Any other thoughts?

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main Buff abstract class, where only the relevant events are overloaded. Then each character could have a vector of buffs currently applied. Does this solution make sense? I can certainly see dozens of event types being necessary; it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, if I wanted to implement a cap on damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you only do 100% extra instead of 250% extra. And there are more complicated situations that ideally I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in a way that as a game developer I may not want. As a relatively inexperienced C++ programmer (I generally have used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
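
    One minimal sketch of the event-hook idea described above, with interaction rules (like the +100% cap) resolved by the owning character rather than by the buffs themselves. This is only an illustrative shape, and all names here (Buff, DamageBoost, Character) are hypothetical:

    #include <algorithm>
    #include <memory>
    #include <vector>

    // Event-hook base class: subclasses override only the hooks they care about.
    class Buff {
    public:
        virtual ~Buff() = default;
        virtual void onTurnStart() {}
        virtual float damageMultiplierBonus() const { return 0.0f; }   // e.g. 0.25f for "+25% damage"
        virtual int onReceiveDamage(int incoming) { return incoming; } // may absorb or reflect damage
    };

    class DamageBoost : public Buff {
    public:
        explicit DamageBoost(float bonus) : bonus_(bonus) {}
        float damageMultiplierBonus() const override { return bonus_; }
    private:
        float bonus_;
    };

    class Character {
    public:
        void addBuff(std::unique_ptr<Buff> b) { buffs_.push_back(std::move(b)); }
        void startTurn() { for (auto& b : buffs_) b->onTurnStart(); }

        // Interactions live here, not inside individual buffs: sum the bonuses,
        // then clamp the total so stacking can never exceed +100%.
        float totalDamageMultiplier() const {
            float bonus = 0.0f;
            for (const auto& b : buffs_) bonus += b->damageMultiplierBonus();
            return 1.0f + std::min(bonus, 1.0f);
        }
    private:
        std::vector<std::unique_ptr<Buff>> buffs_;
    };

    int main() {
        Character c;
        for (int i = 0; i < 10; ++i) c.addBuff(std::make_unique<DamageBoost>(0.25f));
        // Ten +25% buffs would stack to +250%, but the cap clamps the total to +100%.
        return c.totalDamageMultiplier() == 2.0f ? 0 : 1;
    }

    Because the character (or a small rules object it owns) aggregates the hook results, capping, diminishing returns, or other buff interactions stay in one place instead of being spread across dozens of subclasses.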

    Read the article

  • System Restore Points

    - by Ross Lordon
    Currently I am investigating how to schedule an automatic initiation of a system restore point for all of the workstations in my office. It seems that Task Scheduler already has some nice defaults (screenshots below). Even the history for this task verifies that it is running successfully. However, when I go to Recovery in the Control Panel it only lists one System Restore Point per previous week, and only for 3 weeks back, even if I check the "show more restore points" box. Why don't the additional ones appear? Would there be a better way to implement a solution, like via group policy or a script? Is there any documentation on this online? I've been having a hard time tracking anything down except how to subvert group policy disabling this feature.

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) on the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering: Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :) Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)

    Read the article

  • Ubuntu 10.10: getting appropriate monitor resolution for LCD HDTV

    - by lurscher
    I'm running Ubuntu 10.10 x86_64, with an Nvidia 9800 GT, and installed the 270.41.06 Nvidia drivers following this guide. I have an LG 42LH30FR LCD TV connected with the DVI link (RGB PC input). I'm able to get 1024x768 resolution without overscan (I can get 1080i = 1366x768, but there is a lot of hidden screen space to the right and I don't know what to do about it). I want to get full HD. I can get full HD (1080p = 1920x1080) on Windows XP 64-bit with a custom resolution created with the Nvidia Control Panel. From reading over xorg.conf configurations it seems I need to add a certain Modeline to the monitor configuration, but I don't know where to get the appropriate options for this task. Any suggestions?

    Read the article

  • Effective handling of variables in non-object-oriented programming

    - by srnka
    What is the best method to use and share variables between functions in non-object-oriented programming languages? Let's say that I use 10 parameters from the DB: an ID and 9 other values linked to it. I need to work with all 10 parameters in many functions. I can do it in the following ways:
    1. Call functions using only the ID, and in every function get the other parameters from the DB. Advantage: local variables are clearly visible, and there is only one input parameter per function. Disadvantage: it's slow, and the same parameter-fetching rows appear in every function, which makes the functions longer and less clear.
    2. Call functions with all 10 parameters. Advantage: working with local variables, clear function code. Disadvantage: many input parameters, which is not nice.
    3. Get the parameters as global variables once and use them everywhere. Advantage: clearer code, shorter functions, faster processing. Disadvantage: global variables - losing control of them, possibility of unwanted overwriting (especially when some functions should change their values).
    Maybe there is another way to implement this that would make the program cleaner and more effective. Can you say which way is the best for solving this issue?
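
    One common middle ground, sketched below, is to fetch the values once, group them in a single record, and pass that record to each function. This keeps option 2's locality without 10 separate arguments and without option 3's globals. It is only a sketch; the field names, the stub loadParamsFromDb, and the example functions are hypothetical stand-ins:

    #include <string>

    // The ID plus the nine related values, fetched once and passed around as a unit.
    struct Params {
        int         id;
        std::string name;
        double      price;
        int         quantity;
        // ... remaining fields from the DB row
    };

    // Stand-in for the single DB read; real code would run the query here.
    Params loadParamsFromDb(int id) {
        return Params{id, "sample", 9.99, 3};
    }

    // Read-only use: pass by const reference, so the signature stays short and
    // nothing can be modified behind the caller's back.
    double computeTotal(const Params& p) {
        return p.price * p.quantity;
    }

    // Functions that must change values take a non-const reference, which makes
    // the mutation visible at the call site (unlike a global).
    void applyDiscount(Params& p, double rate) {
        p.price *= (1.0 - rate);
    }

    int main() {
        Params p = loadParamsFromDb(42);  // one fetch, as in option 1
        applyDiscount(p, 0.10);
        return computeTotal(p) > 0.0 ? 0 : 1;
    }

    In plain C the same idea works with a struct passed by pointer (const-qualified for read-only access).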

    Read the article

  • Scenes from OpenWorld Day One

    - by Larry Wake
    Sunday's the day that everything comes together, but there's always that last minute scramble. Here are a few peeks at what everyone's doing, and may still be doing far into the night. This is the team putting the final touches on the Hands-On Lab room for  HOL10201, "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications". This should be a great learning experience--plus it's a chance to meet up with some of the top Solaris security people, including Glenn Faden and Darren Moffat. And here's the OTN Garage's own Rick Ramsey, working feverishly to help set up the Oracle Solaris Systems Pavilion. (Moscone South, Booth 733). Several of our featured partners will be demonstrating solutions running on Oracle Solaris systems -- plus, we'll be serving espresso, to help you power through the week. Another panorama shot, courtesy of iOS 6 -- come for the maps, stay for the photos.... Moscone South is also home once again this year to the systems and storage DEMOgrounds. Plenty to learn and see; you might even catch a glimpse of me there on Tuesday afternoon.

    Read the article

  • YSlow says certain CSS files are not gzipped

    - by rhand
    YSlow keeps on telling me files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped while the gzip test tool at Feed the Bot mentions I am all good:
    Compressed? Yes
    Compression type gzip
    Page size (Bytes) 32,493
    Compressed size (Bytes) -1
    Saving (Bytes) 32,494
    Compression % 100%
    I added this to my .htaccess:
    # Gzip
    <ifModule mod_gzip.c>
    mod_gzip_on Yes
    mod_gzip_dechunk Yes
    mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
    mod_gzip_item_include handler ^cgi-script$
    mod_gzip_item_include mime ^text/.*
    mod_gzip_item_include mime ^application/x-javascript.*
    mod_gzip_item_exclude mime ^image/.*
    mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
    </ifModule>
    #Deflate
    <ifmodule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
    </ifmodule>
    The header for the file mentioned states:
    CF-Cache-Status MISS
    CF-RAY 13945df90a9a0c1d-AMS
    Cache-Control public, max-age=2592000
    Connection keep-alive
    Content-Encoding gzip
    Content-Type application/javascript
    Date Thu, 12 Jun 2014 07:34:38 GMT
    Expires Sat, 12 Jul 2014 07:34:38 GMT
    Last-Modified Thu, 21 Feb 2013 01:29:18 GMT
    Server cloudflare-nginx
    Transfer-Encoding chunked
    Vary Accept-Encoding
    Any ideas what I am missing here?

    Read the article

  • How to manage a home-grown YUM package repo?

    - by TomOnTime
    There are plenty of websites that explain how to manage a mirror of YUM repos. I want to run a repo for my home-grown packages. Is there a good way to manage such repos? What I need to do:
    - Manage 3 repos: unstable, testing, stable
    - Self-service functions that let users add/remove/promote packages (promote means moving a package unstable→testing or testing→stable).
    - ACLs that control which users/groups may add/remove/promote packages.
    - Automatically re-sign packages as they move from repo to repo (since the GPG key for "stable" should be different than the one for "unstable").
    - Automatically run "createrepo" to update repodata when needed.
    Suggestions?

    Read the article

  • Which home automation technology to choose? X10, Z-Wave, etc. in the UK [closed]

    - by Stewart Robinson
    I work away most of the week and have to leave my house empty. I would like to have home automation monitor and control my house. Naturally I want to be a bit of a geek and do some of it myself, so I've researched stuff like X10, Z-Wave, C-Bus, etc., but I want opinions on which I should use in my house. I have a Linux box that could be used to actually do the controlling if needed. So which technology, and why?

    Read the article

  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure -- but before getting into that, let me explain what I mean by static and dynamic data sources.
    Static (or datastore) - Meaning that the data's state is non-changing, and if it was changed, that would be a new state, and the old data would be considered stateless; meaning it is no longer known to exist, or not exist. Another way of looking at a static data source might be that if it is read and written back without modification, the checksum before and after should be exactly the same, regardless of the duration of time between the reading and rewriting of the data. Examples: photos, files, database records.
    Dynamic (or datastream) - Meaning that the data's state is known to be in flux, and never expected to be the same per input. Examples: a live video/audio feed, a stock market feed.
    First let me say, the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, onto the core of the question: are these the only two types of data sources? My guess is that yes, they are -- but that there are hybrid versions of the two. That being, streaming data that has a fixed state. For example, the data being streamed has a checksum given, and each unique checksum is known to be a single instance of static data. On the flip side, static data could be chained via, say, a version control system; when played back, each version might be viewed as a segment of a stream; thing is, the very fact that it can be played back makes the data source static. Another type might be that the data source is being organically discovered, and it's simply unknown what the state is. Questions, feedback, requests -- just comment, thanks!!
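
    A tiny sketch of the checksum test described above: write a static datum to a store, read it back later, and compare checksums. FNV-1a is used here purely as an illustrative checksum, and the "store" is a hypothetical stand-in for a file or database cell:

    #include <cstdint>
    #include <string>

    // FNV-1a 64-bit hash, standing in for whatever checksum the data source uses.
    std::uint64_t checksum(const std::string& bytes) {
        std::uint64_t h = 14695981039346656037ULL;
        for (unsigned char c : bytes) { h ^= c; h *= 1099511628211ULL; }
        return h;
    }

    int main() {
        std::string snapshot = "row 42: price=9.99";  // a static datum, e.g. a DB record
        std::string store;                            // stand-in for durable storage
        store = snapshot;                             // write it out
        // ... any amount of time may pass ...
        std::string readBack = store;                 // read it back, unmodified
        // For a static source the checksums match; a stream sampled twice generally would not.
        return checksum(snapshot) == checksum(readBack) ? 0 : 1;
    }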

    Read the article

  • How should I structure my turn-based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a Turn-Based Strategy engine in GameMaker's GML language... And I was cruising along nicely until it came time to handle the turn cycle, determine who is controlling which player, and also handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:
    switch (GameState) {
        case BEGIN_TURN:
            // Start of turn operations/routines
            break;
        case MID_TURN:
            switch (PControlledBy[Turn]) {
                case HUMAN:
                    switch (MidTurnState) {
                        case MT_SELECT:
                            // No units selected, 'idle' UI state
                            break;
                        case MT_MOVE:
                            // Unit selected and attempting to move
                            break;
                        case MT_ATTACK:
                            break;
                    }
                    break;
                case COMPUTER:
                    // AI ROUTINES GO HERE
                    break;
                case OBSERVER:
                    // OBSERVER ROUTINES GO HERE
                    break;
            }
            break;
        case END_TURN:
            // End of turn routines/operations, and move Turn to next player
            break;
    }
    Now, I can see a couple of problems with this set-up already... But I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is, and the BEGIN_TURN and END_TURN states make perfect sense to me... But the MID_TURN state is baffling me because of the things I want to happen here: If there are players controlled by humans, I want the AI to do its thing on its turn here, but I want to be able to have the camera follow the AI as it makes moves in the human player's vision. If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision. So basically I'm wondering if there are any resources for how to structure a Turn-Based Strategy engine? I've found lots of writing about pathfinding and AI, and those are all great... But when it comes to handling the turn structure and the game states I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track I just need some reassurance before I lay down another few hundred lines of code...
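
    One way to divide this, sketched below, is to keep the BEGIN/MID/END turn cycle in the engine and push the HUMAN/COMPUTER/OBSERVER differences behind a controller interface, so MID_TURN just asks whoever controls the current player to act while the camera follows whoever that is. The sketch is in C++ rather than GML purely for brevity, and every name in it (PlayerController, TurnContext, etc.) is hypothetical:

    #include <memory>
    #include <vector>

    struct TurnContext { /* selected unit, map, camera target, etc. */ };

    // The engine talks to every player slot through this interface.
    class PlayerController {
    public:
        virtual ~PlayerController() = default;
        virtual bool takeTurn(TurnContext& ctx) = 0;           // returns true when the turn is finished
        virtual bool wantsCameraFollow() const { return true; }
    };

    class HumanController : public PlayerController {
    public:
        bool takeTurn(TurnContext&) override { /* read input, drive SELECT/MOVE/ATTACK UI */ return false; }
    };

    class AiController : public PlayerController {
    public:
        bool takeTurn(TurnContext&) override { /* run AI, issue the same commands a human would */ return true; }
    };

    class ObserverController : public PlayerController {
    public:
        bool takeTurn(TurnContext&) override { return true; }  // makes no moves, just watches
    };

    // MID_TURN no longer switches on who controls the player; it delegates.
    void runMidTurn(std::vector<std::unique_ptr<PlayerController>>& players, int turn, TurnContext& ctx) {
        PlayerController& current = *players[turn];
        if (current.wantsCameraFollow()) { /* point the camera at the acting player's units */ }
        bool done = current.takeTurn(ctx);
        if (done) { /* advance GameState to END_TURN */ }
    }

    With this split, an all-AI match with an observer camera is just a particular mix of controller objects in the player list; the turn-cycle state machine itself never changes.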

    Read the article

  • How to disable all window borders from the VLC playback window

    - by Rob Thomas
    Is there a way to make the VLC playback window completely borderless (no title bar, no other borders)? Ideally, I would like the playback window to be completely borderless, with a separate window that has the controls (play, pause, timeline control, etc.). UPDATES: I cannot use full-screen mode; I would like the playback window to be sized the same as the video, which is usually about 300x300 px. Also, I need to be able to position the window anywhere on the desktop. I'm using the Windows version of VLC.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.
    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).
    Strongly Typed
    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.
    // With the SDK
    public class MyData1 : TableServiceEntity
    {
        public string Message { get; set; }
        public string Level { get; set; }
        public string Severity { get; set; }
    }

    // With the Enzo Azure API
    public class MyData2 : BaseAzureTable
    {
        public string Message { get; set; }
        public string Level { get; set; }
        public string Severity { get; set; }
    }
    Simpler Code
    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):
    // With the Azure SDK
    public List<MyData1> FetchAllEntities()
    {
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
        CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
        TableServiceContext serviceContext = tableClient.GetDataServiceContext();
        CloudTableQuery<MyData1> partitionQuery =
            (from e in serviceContext.CreateQuery<MyData1>(_tableName)
             select new MyData1()
             {
                 PartitionKey = e.PartitionKey,
                 RowKey = e.RowKey,
                 Timestamp = e.Timestamp,
                 Message = e.Message,
                 Level = e.Level,
                 Severity = e.Severity
             }).AsTableServiceQuery<MyData1>();
        return partitionQuery.ToList();
    }
    This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow.
    And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:
    // With the Enzo Azure API
    public List<MyData2> FetchAllEntities()
    {
        AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
        List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
        return res;
    }
    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).
    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously):
    public List<MyData2> FetchAllEntitiesGUID()
    {
        AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
        List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
        return res;
    }
    Faster Results With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.
    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).
    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Session Tracking - Advantages and Disadvantages

    I used to work for a major internet company that sold dental plans to consumers through their large customer-driven website. They start tracking their users as soon as they hit their web servers, and then they log everything they can about the user. There are a lot of benefits to using session tracking, for both the user and the website. Users can benefit from session tracking because a website can retain pertinent information for the user so that they do not have to re-enter the same information repeatedly. In addition, websites can hold specific items in a cart for each user so that they can pay for all of their items at once when they are ready to complete their purchases. Websites can also benefit from session tracking because they can determine where a specific user came from and which advertising partner gave them a sale. This information is very useful when deciding where to spend an advertising budget. There is only one real disadvantage when it comes to session tracking: users cannot really control what is actually tracked by a website. Yes, they can disable cookies and this will help, but that means that no tracking can be done at all. Most sites require users to have cookies enabled in order to make purchases or log in to their accounts.
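
    A minimal sketch of the server-side half of this: per-user state keyed by the session ID that the browser sends back in a cookie on each request. The names, fields, and the literal session ID below are hypothetical stand-ins, not any particular site's implementation:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // State the site keeps for one visitor between requests.
    struct Session {
        std::string referrer;                // e.g. which advertising partner sent the user
        std::vector<std::string> cartItems;  // items held across page views
    };

    class SessionStore {
    public:
        // Looks up the visitor's state by session ID, creating it on first visit.
        Session& lookupOrCreate(const std::string& sessionId) {
            return sessions_[sessionId];
        }
        void addToCart(const std::string& sessionId, const std::string& item) {
            sessions_[sessionId].cartItems.push_back(item);
        }
    private:
        std::unordered_map<std::string, Session> sessions_;
    };

    int main() {
        SessionStore store;
        // "abc123" stands in for the value of a session cookie issued on the first visit.
        store.lookupOrCreate("abc123").referrer = "partner-42";
        store.addToCart("abc123", "dental-plan-basic");
    }

    If the browser has cookies disabled, no session ID comes back and nothing can be keyed to the visitor, which is exactly the limitation noted above.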

    Read the article

  • Web based console connections not working in Windows 7

    - by nmeth
    For slightly complicated reasons we tend to give people console access to VMs via the web UI. This has worked fine in the past; however, when users update their client machines to Windows 7 (or Vista, I am told, although I have not tested that), the console fails to work. On IE8, having allowed the ActiveX control, the tab causes an "Internet Explorer has stopped working" dialog. On Firefox 3.5, once the plugin has been installed, using the console causes the browser to crash. I've updated to the most recent VC 2.5 release, and ESX 3.5u5. Anyone else seeing this? Any clues how to get around it (other than using the fat client)? Nigel.

    Read the article

  • Why are my players drawn to the side of my viewport?

    - by Jetbuster
    Following this admittedly brilliant and clean 2D camera class, I have a camera on each player, and it works for multiplayer. I've divided the screen into two sections for split screen by giving each camera a viewport. However, in the game it looks like this. I'm not sure if that's their position relative to the screen or what. The relevant gameScreen code (makePlayers is set up so it could theoretically work for up to 4 players):
    private void makePlayers()
    {
        int rowCount = 1;
        if (NumberOfPlayers > 2)
            rowCount = 2;
        players = new Player[NumberOfPlayers];
        for (int i = 0; i < players.Length; i++)
        {
            int xSize = GameRef.Window.ClientBounds.Width / 2;
            int ySize = GameRef.Window.ClientBounds.Height / rowCount;
            int col = i % rowCount;
            int row = i / rowCount;
            int xPoint = 0 + xSize * row;
            int yPoint = 0 + ySize * col;
            Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize);
            Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2, viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2);
            players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport);
        }
        //players[1].Keyboard = true;
    }
    public override void Draw(GameTime gameTime)
    {
        base.Draw(gameTime);
        foreach (Player player in players)
        {
            GraphicsDevice.Viewport = player.PlayerCamera.ViewPort;
            GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform);
            map.Draw(GameRef.spriteBatch);
            // Draw the Player
            player.Draw(GameRef.spriteBatch);
            // Draw UI screen elements
            GraphicsDevice.Viewport = Viewport;
            ControlManager.Draw(GameRef.spriteBatch);
            GameRef.spriteBatch.End();
        }
    }
    The player's Initialize and Draw methods are like so:
    internal void Initialize()
    {
        this.score = 0;
        this.angle = (float)(Math.PI * 0 / 180); // Start sprite at its default rotation
        int width = utils.scaleInt(picture.Width, imageScale);
        int height = utils.scaleInt(picture.Height, imageScale);
        this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2), width, height, Color.Black, game.Window.ClientBounds);
        playerCamera.Initialize();
    }
    #region Methods
    public void Draw(SpriteBatch spriteBatch)
    {
        //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y);
        //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y);
        Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2);
        hitBox.Draw(spriteBatch);
        utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red);
        spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f);
    }
    As I said, I think I'm going to need to do something with the render position, but I'm not entirely sure what, or how to make it elegant, to say the least.

    Read the article

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium-sized programming projects before and have no experience working in a team environment. Currently, there will be 3 of us in an in-house software development team that is tasked to develop a number of software projects for an academic institution. We have decided to use the web for the majority of the projects and are planning to choose Ruby on Rails for this, and I would like to ask for your input, advice and approaches with regard to software development as a team using the RoR web framework. One thing that has really confounded me is how you divide the programming tasks of a project if there are 3 of you that are really doing the coding. It's obvious that we as developers approach a problem in a modular way and finish it one after another. If the project consists of 3 modules, should each one of us focus on one of those modules? Would it be faster that way? How about if the 3 of us were to focus on one module first (that's what I really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please don't forget to share your tips and experiences with regard to team software development. Cheers!

    Read the article

  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been connected to a domain. The domain has a Samba server as a PDC. Now say that I move the laptop outside of the network (the network is completely inaccessible). Will I be able to log on to accounts I have accessed before on the laptop (through GINA)? Update: Looking at the smb.conf documentation I noticed the setting winbind offline logon: "This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache." To me it looks like this solves the issue, but can anyone else confirm it and/or point out if any additional values need to be set?

    Read the article

  • Should this folder called Data be indexed?

    - by panny
    In the indexing options of Windows 7 there is a folder called Data which is excluded from indexing for the C:\ drive by default. Can someone confirm this, please? I was not able to locate that folder on my drive, nor include it in the search index. The difference in the number of indexed files is unsatisfying: the Windows 7 native indexing service shows 377703 files on six drives; a third-party desktop search indexing service shows 698654 files on the same number of drives. Files in UA Control seem not to be indexed without proper privileges. How can this be circumvented?

    Read the article

  • Need advice on an approach for a web-based app that loads an Excel worksheet but exposes only the charts

    - by John
    I'm looking for suggestions on the Visual Studio approach to take for a web application that is in the conceptual stage. My environment has a lot of tools:
    - Windows Server 2008 R2 Standard 64-bit
    - Visual Studio 2010 Professional Edition
    - SharePoint 2010 Server Enterprise Edition
    - SQL Server 2008 R2
    - Office 2010 Professional
    I know I will need this app to retrieve data from a database (or a web service - not sure exactly at this point). The data needs to be placed in an Excel workbook dynamically. The app will need to have a nice user interface (standard web controls - perhaps with some JavaScript effects). The Excel ribbon and worksheet grid will need to be hidden. Some web control(s) will cause the Excel chart(s) to be rendered. I am thinking this sounds like Visual Studio Tools for Office (VSTO), so as to leverage .NET and hide Excel. Can you offer suggestions regarding:
    - One ASP.NET Web App Project
    - One Class Library Project for Excel, or perhaps which one of the several different Excel 2010 project types (add-in, template, document)
    - Would Excel Services for SharePoint be useful (or required)?
    I am feeling a little overwhelmed with so many choices at this early stage of conceptualizing the app. Can you suggest some ideas for this sort of thing? Also, I am a bit more experienced with C#, but I've read VB.NET is better for working with the Excel object model. What is the general advice with regard to tool choice and overall approach tradeoffs?

    Read the article

  • Creative software leftovers - ASIO error message

    - by Tony Patriarche
    I temporarily installed an old Creative Audigy 2 soundboard on my Vista x64 Home Premium computer. Bad idea! I uninstalled the board & all software visible in the Control Panel. Now, with one particular app (Sibelius) I keep getting a start-up message: "CTASIO Warning: Creative ASIO: there are no Creative audio products installed on the system that support ASIO". I offer this as a candidate for "Most useless message", but that's beside the point. I used a commercial registry cleaner (PC-Tools Registry Mechanic) and then edited the registry looking for "creative", "audigy" and "ASIO". After removing everything I could find, I still get the message. Any suggestions?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise." Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain." Thought for the Day "The hardest single part of building a software system is deciding precisely what to build. " — Fred Brooks Source: SoftwareQuotes.com

    Read the article

  • fail2ban on server with LXC Containers

    - by RoboTamer
    The issue is that modprobe and iptables don't work inside an LXC container. LXC is the userspace control package for Linux Containers, a lightweight virtual system mechanism sometimes described as “chroot on steroids”. The iptables error inside the container is:
    # iptables -I INPUT -s 122.129.126.194 -j DROP
    iptables v1.4.8: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
    Perhaps iptables or your kernel needs to be upgraded.
    I am guessing that it can't work because the LXC containers share one kernel, the main server's kernel. How do I do fail2ban in this case? modprobe and iptables work on the main server, so my guess is I could install it there and link to the log files somehow? Any suggestions?

    Read the article

< Previous Page | 735 736 737 738 739 740 741 742 743 744 745 746  | Next Page >