Search Results

Search found 10536 results on 422 pages for 'cpu usage'.

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are:
    1) Simply create bitmaps with space scenery backgrounds that can be seamlessly tiled, construct my 2D universe out of these tiles, then place the interactive objects (planets, other spaceships) on top.
    2) Use vector graphics. I would have a solid color background, with some random background objects and gradients here and there. My problem here is a lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage?
    Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory for the whole game session? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.
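
    With the bitmap-tiling idea, a pseudo-infinite universe does not require keeping big bitmaps in memory at all: wrap the world coordinates with modular arithmetic and only ever draw the few tiles that intersect the screen. A minimal sketch in Java, assuming Android's android.graphics Canvas API (the class name and tile-picking scheme are illustrative, not from any particular engine):

      import android.graphics.Bitmap;
      import android.graphics.Canvas;

      public final class TiledBackground {
          private static final int TILE = 512;      // tile edge in pixels
          private final Bitmap[] tiles;             // small reusable set of seamless tiles

          public TiledBackground(Bitmap[] tiles) { this.tiles = tiles; }

          public void draw(Canvas canvas, int camX, int camY, int viewW, int viewH) {
              int firstCol = Math.floorDiv(camX, TILE);
              int firstRow = Math.floorDiv(camY, TILE);
              for (int row = firstRow; row * TILE < camY + viewH; row++) {
                  for (int col = firstCol; col * TILE < camX + viewW; col++) {
                      // floorMod keeps the index positive, so the pattern repeats
                      // forever in every direction instead of ending at a border.
                      Bitmap tile = tiles[Math.floorMod(row * 31 + col, tiles.length)];
                      canvas.drawBitmap(tile, col * TILE - camX, row * TILE - camY, null);
                  }
              }
          }
      }

    Only tiles.length bitmaps are ever resident, regardless of how large the universe appears to the player.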

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me to perform the world update either on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here is to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    1) Reading 5 private entity variables (fetched from the datastore)
    2) Fetching as many as 20 static variables (from the datastore, or persisted in server memory)
    3) Writing 5 entity variables
    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
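
    Whichever side runs the simulation, the datastore traffic dominates the quota arithmetic. A rough worked estimate from the numbers above, assuming the worst case where every static variable is fetched from the datastore rather than cached:

      10,000 entities x 1,440 updates/day                = 14,400,000 entity updates/day
      14,400,000 x (5 reads + up to 20 static fetches)   = up to 360,000,000 reads/day
      14,400,000 x 5 writes                              = 72,000,000 writes/day

    Persisting the 20 static variables in server memory removes the largest term entirely, which arguably matters more for the quota bill than where the update loop itself runs.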

    Read the article

  • Linux distro for software development support?

    - by Xie Jilei
    I've spent too much time setting up and maintaining a development server, which contains the following tools:
    1) Common services like SSH, BIND, rsync, etc.
    2) Subversion, Git.
    3) Apache server, which runs CGit, Trac, Webmin, phpMyAdmin, phpPgAdmin, etc.
    4) Jetty, which runs Archiva and Hudson.
    5) Bugzilla.
    6) PostgreSQL server, MySQL server.
    I've created a lot of Debian packages, like my-trac-utils, my-bugzilla-utils, my-bind9-utils, my-mysql-utils, etc., to make my life more convenient. However, I still feel I need a lot more utils, and I've spent a lot of time maintaining these packages, too. I think there may be many developers doing the same things, as tools like Subversion, Git and Trac are so common today. It's not too hard to install and configure each of them, but it takes a long time to install them all, and it's time-consuming to maintain them: backing up the data, plotting the usage graphs and generating web reports (gitstat, for example). So, I'd like to hear whether there exists any pre-configured distro for development-server purposes, i.e., something like BackTrack for hackers?

    Read the article

  • Seeking a solution for printing/reporting in .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, making it impossible to change the codepage/emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device. Most of these don't do a good job of extracting text from XPS, as they don't follow the standard but rather the way Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it is impossible to send a paper-cut command after the signing. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and replace text. I know there is StringTemplate, which could do the template generation, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?
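
    StringTemplate would indeed cover only the text-generation half; the GDI rendering still has to be written against the generated output. For the template half, a minimal sketch of StringTemplate's attribute model (shown here in Java for illustration; the .NET port exposes an equivalent API, and the <...> placeholders are the v4 default delimiters):

      import org.stringtemplate.v4.ST;

      public class ReceiptLineDemo {
          public static void main(String[] args) {
              // <name>, <qty> and <total> are attribute placeholders.
              ST line = new ST("<name>  x<qty>  <total>");
              line.add("name", "Coffee");
              line.add("qty", 2);
              line.add("total", "3.80");
              System.out.println(line.render());   // prints: Coffee  x2  3.80
          }
      }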

    Read the article

  • Visual Studio 2010 editor painfully slow [closed]

    - by Daniel Gehriger
    I'm running out of patience with MS Visual Studio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1-2 seconds whenever I move the cursor to a different line, move to a different window, or generally whenever the editor loses and gains focus. I went through a whole series of optimizations, to no avail:
    1) installed all hotfixes for VS2010
    2) disabled all add-ins and extensions
    3) disabled IntelliSense
    4) deleted all temporary files created by VS2010
    5) disabled hardware acceleration
    6) unloaded all but 15 projects
    7) disabled tracking changes
    8) closed all but one window
    and so on. This is on a dual-core machine with an SSD hard drive (verified throughput 100 MB/s), enough free space on the HD, and Windows 7 Pro 32-bit with 3 GB of RAM, most of it still free. Whenever I type a letter, the CPU usage of devenv.exe goes to 50-90% in Process Monitor for 1-2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only notice frequent accesses to the .sln file (which is quite small) and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 and now to VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, GlassFish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices in operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some individual aspects of DevOps may not be new - for example, you may have used the tool Puppet for years already - but with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software); it provides a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hüttermann, a Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations as well as recipes for processes and tools to streamline the delivery process," Hüttermann explains.

    Read the article

  • Architectural and Design Challenges with SOA

    With all of the hype about service-oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes. Example challenges of SOA:
    1) Network connectivity issues
    2) Handling connectivity issues
    3) Longer processing/transaction times
    How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points to the application. Potential failure points include: DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time, or the number of attempts it should retry? Services being distributed across a network also automatically reduces the responsiveness of client applications, because processing time must now include the time to send and receive messages from called services. This can add anywhere from milliseconds to minutes per request, based on network load and server usage at the service provider. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located. References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458
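
    When a client does decide to poll until a service is reachable again, the usual answer to "how long, and how many attempts?" is a capped exponential backoff: double the wait after each failure up to a ceiling, and give up after a fixed number of attempts. A minimal sketch in Java (the call itself stands in for any remote invocation; names are illustrative):

      import java.util.concurrent.Callable;

      public final class RetryingCaller {
          // Retries a flaky remote call, doubling the delay after each failure.
          public static <T> T callWithRetry(Callable<T> call, int maxAttempts,
                                            long initialDelayMs) throws Exception {
              long delay = initialDelayMs;
              for (int attempt = 1; ; attempt++) {
                  try {
                      return call.call();
                  } catch (Exception e) {                    // e.g. a connectivity failure
                      if (attempt >= maxAttempts) throw e;   // give up, let the caller decide
                      Thread.sleep(delay);
                      delay = Math.min(delay * 2, 30_000);   // cap the wait at 30 seconds
                  }
              }
          }
      }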

    Read the article

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites, or are being generated directly by our website. Here are some examples of these URLs: oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3 Our site is a popular Spanish adult site, yet we don't have keywords like the ones mentioned in this URL. Apparently this link comes from our site. Some more examples: oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4 Once again: not our queries, not our URLs. oursite/tag/tetonas We think that it might be another site, which pursues an extremely bad SEO policy based on other sites' branding and keyword usage: thirdsite/buscador/tetonas-oursite The questions are: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link to the other site was added? What should we do with these errors - 301? 410 Gone? I have read all the similar Q&As here, but none of them seems to solve our problem. It is not likely to be a bad ad (we inspected them all). Maybe some old content which Google suddenly decided to recrawl? Maybe a third party's bad SEO policy? Maybe all of them?
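
    If the verdict is that these URLs should simply die, 410 is the stronger signal: it tells Google the resource is gone on purpose rather than temporarily missing. Since the junk paths in the examples all start with "&q=", a pattern-matched rule can answer them wholesale. A hedged sketch for Apache's mod_rewrite (verify the pattern against your own logs before deploying; the [G] flag returns "410 Gone"):

      # .htaccess: answer the scraped-query URLs with 410 Gone
      RewriteEngine On
      RewriteRule ^&q= - [G]

    The /tag/... URLs would need a separate decision, since those may be legitimate pages on the site.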

    Read the article

  • Desktop application, dependency injection

    - by liori
    I am thinking of applying a real dependency injection library to my toy C#/GTK# desktop application. I chose Ninject, but I think this is irrelevant to my question. There is a database object, a main window and several utility window classes. It's clear that I can inject the database into every window object, so here DI is useful. But does it make sense to inject utility window classes into other window classes? Example: I have classes such as:
    class MainWindow {…}
    class AddItemWindow {…}
    class AddAttachmentWindow {…}
    class BrowseItemsWindow {…}
    class QueryBuilderWindow {…}
    class QueryBrowserWindow {…}
    class PreferencesWindow {…}
    …
    Each of the utility classes can be opened from MainWindow. Some utility windows can also be opened from other utility windows. Generally, there might be a really complex graph of who can open whom, so each of those classes might need quite a lot of other window classes injected. I'm worried that such usage will go against the suggestion not to inject too many classes at once and will become a code smell. Should I use some kind of service locator object here?
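
    A common middle ground between injecting every window and falling back to a service locator is to inject lazy factories: each window declares which windows it can open, but nothing is constructed until it is actually needed. A sketch of the idea in Java (the question's code is C#, where Func<T> or Ninject's factory extension plays the same role as Supplier<T> here; all class bodies are illustrative):

      import java.util.function.Supplier;

      class AddItemWindow { void show() { /* build and display the window */ } }
      class PreferencesWindow { void show() { /* build and display the window */ } }

      class MainWindow {
          // Only factories for windows this one can open; nothing is built up front.
          private final Supplier<AddItemWindow> addItem;
          private final Supplier<PreferencesWindow> preferences;

          MainWindow(Supplier<AddItemWindow> addItem,
                     Supplier<PreferencesWindow> preferences) {
              this.addItem = addItem;
              this.preferences = preferences;
          }

          void onAddItemClicked() { addItem.get().show(); }
          void onPreferencesClicked() { preferences.get().show(); }
      }

    The dependency graph stays explicit in the constructors, but the "who can open whom" complexity no longer forces eager construction of the whole window graph.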

    Read the article

  • Logging library for (C++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all the logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and that I have good reasons for looking for a logging library (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:
    1) performance
    2) ease of use (allows streaming or formatting or something like that)
    3) reliability (doesn't leak or crash!)
    4) cross-platform support (at least Windows, Mac OS X, Linux/Ubuntu)
    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it does not have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding while being complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.
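
    The multithreading requirement is usually met with an asynchronous design: game threads only push messages onto a queue, and a single background thread does the formatting and I/O, so no frame ever blocks on disk. A minimal sketch of that pattern (in Java for brevity; in C++ the same shape is a std::thread draining a mutex-guarded queue):

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;

      public final class AsyncLogger {
          private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

          public AsyncLogger() {
              Thread writer = new Thread(() -> {
                  try {
                      // Real code would write to a file or socket instead of stdout.
                      while (true) System.out.println(queue.take());
                  } catch (InterruptedException e) {
                      Thread.currentThread().interrupt();
                  }
              }, "log-writer");
              writer.setDaemon(true);   // don't keep the game alive just to log
              writer.start();
          }

          public void log(String msg) {
              queue.offer(msg);         // cheap and non-blocking for game threads
          }
      }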

    Read the article

  • Why does video playback lag/freeze when I go into full-screen mode?

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following things happen:
    1) The video starts lagging.
    2) Audio and video go out of sync.
    3) CPU usage rises to ~50%.
    4) SMPlayer starts to hang.
    My current SMPlayer configuration:
    1) Video output driver = x11 (slow)
    2) Audio output driver = alsa (0.0-HDA Intel)
    3) Cache = 8192 KB
    4) Threads for decoding (MPEG-1/2 and H.264 only) = 2
    Things I tried to solve this problem:
    1) Tried changing the video output driver to xv, gl.
    2) Tried changing the audio output driver to pulse.
    3) Tried increasing the cache size, and also tried using nocache.
    Everything works fine on Windows, but I don't want to switch to Windows just to play video files. My system config: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6GHz, 2GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae. Everything else is working fine: I have no resolution issues, and Bluetooth and wireless are also working fine. Just ask for any other log file and I will be happy to post it. Attached: SMPlayer log, MPlayer terminal output, codec information (currently playing file).

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years, and usually my partition setup was the following:
    1) An ext3 or ext4 partition for the system itself (20 GB);
    2) A 10 GB swap partition;
    3) A big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but usually it is what is left after ext3 + swap; currently it is more than 200 GB).
    Does this setup sound right? I am considering switching to one big ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere:
    1) Right now I can access my 'big' partition, which has a 'Data' label, only through /media/_themes?END. Pretty strange name for a partition, isn't it?
    2) Some Linux software fails to read/write on this partition. For example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it will always complain about permissions and won't work (the same goes for many other kinds of software).
    3) It is not stable: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called not '_themes?END' but something else.
    On the other hand, I usually begin to run out of space on the ext3 partition after a few months of usage. So, the question is: what is the best partition setup for an Ubuntu system? Should a FAT32 partition be used at all?
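
    For what it's worth, the unstable /media/_themes?END mount point comes from auto-mounting by volume label; a fixed entry in /etc/fstab pins both the path and the ownership of a FAT32 partition (FAT32 has no Unix permissions, which is also why build tools complain). A sketch, with an illustrative label and mount point - check yours with blkid first, and create the mount point directory beforehand:

      # /etc/fstab - mount the FAT32 data partition at a fixed, writable path
      # <device>   <mount point>  <type>  <options>                    <dump> <pass>
      LABEL=Data   /media/data    vfat    uid=1000,gid=1000,umask=022  0      0

    The uid/gid/umask options make the whole partition owned by the first user, since FAT32 cannot store ownership itself.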

    Read the article

  • Custom inventory items based on inheritance

    - by Bogdan Marginean
    So, here's the scenario: I'm building an RPG. Like most of the other RPGs on the market, my game will feature an inventory and, of course, inventory items. So far I've worked well with using a single class for all items, because I did not need anything more than character stat alteration on item usage (consumption). However, I'd like some items to have a more exotic effect. Think of something like the user consuming a transformation potion and automatically turning into a beast. In order to achieve this, I've thought about declaring a new class that inherits from BaseItem for each item. Each descendant would override some methods (like void OnConsume()) to change the base behavior. This works fine, but when it comes to inventory management, I have some issues. The actual inventory will have to work with BaseItem components only (for obvious reasons, as it's an enumerable collection of objects of the same type); casting any descendant to the base class is possible, so there are no problems adding items to the inventory. But how can I keep track of the descendant's type (class) for each item in the inventory? And how can I invoke the descendant's OnConsume from within the inventory, for each item? Let me know if you can think of a better solution than mine, or of a solution to my problem only. Development is done in C#, inside Unity 3.5. Thanks!
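
    This is what virtual dispatch already takes care of: the inventory can hold base-class references and call OnConsume() without ever knowing the concrete type, because the override in the actual subclass is the one that runs. A sketch of the shape in Java (the question's code is C#, where virtual/override methods behave the same way; the item classes are illustrative):

      import java.util.ArrayList;
      import java.util.List;

      abstract class BaseItem {
          abstract void onConsume();   // each item kind overrides this
      }

      class HealthPotion extends BaseItem {
          @Override void onConsume() { System.out.println("restore health"); }
      }

      class TransformationPotion extends BaseItem {
          @Override void onConsume() { System.out.println("turn into a beast"); }
      }

      class Inventory {
          private final List<BaseItem> items = new ArrayList<>();
          void add(BaseItem item) { items.add(item); }
          void consume(int index) {
              // Dispatches to the subclass override; no casts, no type tracking.
              items.get(index).onConsume();
          }
      }

    Explicit type checks are only needed if the inventory must treat some item kinds specially, which is usually a hint that the behavior belongs in the item class instead.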

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a service-oriented architectural design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which web services should be built, and in what order these services should be built. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services.
    SOA's Ten Common Methods for Service Identification and Definition:
    1) Business Process Decomposition
    2) Business Functions
    3) Business Entity Objects
    4) Ownership and Responsibility
    5) Goal-Driven
    6) Component-Based
    7) Existing Supply (Bottom-Up)
    8) Front-Office Application Usage Analysis
    9) Infrastructure
    10) Non-Functional Requirements
    Each of these methods has various pros and cons with regard to its use within the design process. I personally feel that during a design process multiple methodologies should be used in order to accurately define a design for a system or enterprise system. Personally, I like to create a custom cocktail derived from combining these methodologies, in order to ensure that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.
    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Reproducible freezes on an AMD Fusion (E-350) Sony Vaio

    - by doycho
    So a week ago I bought it, and I've been struggling to make the Ubuntu I installed stable. There's one thing that makes my life miserable, though: an easily reproducible freeze when I start some kind of video. Here is what happens:
    1) Everything works fine for some time.
    2) I start vlc/mplayer/flashplayer/totem with something to watch.
    3) Within a few minutes I lose the sound (nothing in the logs at this point).
    4) At that point the video app instantly allocates all the memory and its CPU usage skyrockets.
    5) Total freeze. I can move the cursor around for a few seconds and sometimes even switch to another app, but ultimately there comes a time when I can't do anything: I can't kill X with Ctrl+Alt+Backspace (I have it enabled), can't switch to any other console (Ctrl+Alt+F1-6), and can't connect to the machine via SSH. The only way to restart it is the Ctrl+Alt+SysRq+UABI magic :)
    What discourages me most is that I can't see anything in the logs. The only error I've noticed is: Jun 19 17:00:37 serenity kernel: [ 1506.350676] software-center[17581]: segfault at 30 ip 00007fd3631b814c sp 00007fff18a6fa10 error 4 in libgtk-x11-2.0.so.0.2400.4[7fd362f7d000+436000]. I've been searching through the Xorg log, the kernel logs and syslog. If you have any idea how I can get more debug info, I'll be glad to try it. Things I've tried:
    1) Changing drivers: the open-source one, the proprietary driver.
    2) The xorg-edgers PPA - https://launchpad.net/~xorg-edgers/+archive/ppa
    3) Changing to the latest stable kernel (2.6.39).
    Some notes: it may be irrelevant, but the sound is constantly stuttering (this is probably a separate issue, though). I've also found that if I start more video/sound apps, the freeze happens faster.

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: the sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?
    EDIT: Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    1) If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    2) Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte-blanche approach against ETags for some reason.
    3) If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    4) Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
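
    As a concrete illustration of rules 1 and 2 together, the response headers for a gzipped static asset under this strategy might look like the following (all values are illustrative):

      HTTP/1.1 200 OK
      Content-Type: text/css
      Content-Encoding: gzip
      Cache-Control: private, max-age=31536000
      Vary: Accept-Encoding
      Last-Modified: Tue, 15 Nov 2011 08:12:31 GMT
      ETag: "5d8c72a5edda8d6a"

    Both validators are present, so a conditional request can revalidate by content (If-None-Match) or by date (If-Modified-Since), whichever the cache prefers.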

    Read the article

  • Oracle Secure Global Desktop - Business Continuity During Snowstorm!

    - by Mohan Prabhala
    Capgemini, one of the world's largest management consulting, outsourcing and professional services companies, is an Oracle Secure Global Desktop customer and uses it to provide secure, remote access to 1) corporate applications centralized in the datacenter and 2) desktops hosted on Oracle VDI. Earlier this month, one of Capgemini's government customers in Holland was advised to avoid traveling to work due to a heavy snowstorm. This resulted in a lot of employees working from home. Thankfully, due to their deployment of the Oracle Secure Global Desktop gateway, employees were able to easily access their corporate applications and desktops from home and anywhere outside of their office. Capgemini reports that during the days of the snowstorm, a record number of users leveraged Oracle Secure Global Desktop (servers and gateway). Despite this record usage, Oracle Secure Global Desktop remained perfectly stable and allowed users to seamlessly access their applications and desktops. This is a great example of how Oracle Secure Global Desktop enables employee productivity and business continuity even during severe weather conditions such as snowstorms. We are delighted to have enabled business continuity for Capgemini's customers, and we look forward to our continued relationship with Capgemini. This blog post has been approved for posting by Capgemini.

    Read the article

  • Efficiency concerning thread granularity

    - by MaelmDev
    Lately, I've been thinking of ways to use multithreading to improve the speed of different parts of a game engine. What confuses me is the appropriate granularity of threads, especially when dealing with single-instruction-multiple-data (SIMD) tasks. Let's use line-of-sight detection as an example: each AI actor must be able to detect objects of interest around it and mark them. There are three basic ways to go about this with multithreading:
    1) Don't use threading at all.
    2) Create a thread for each actor.
    3) Create a thread for each actor-object combination.
    Option 1 is obviously going to be the least efficient method. However, choosing between the next two options is more difficult. Using only one thread per actor still runs through every object in series instead of in parallel. But are CPUs able to create and join threads at the granularity posed in Option 3 efficiently? It seems like that many calls to the OS could be really slow, and could vary enormously between different hardware.
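
    A common fourth option sidesteps the thread-creation worry entirely: create a fixed pool of threads once, sized to the hardware, and submit fine-grained tasks to it, so the granularity of Option 2 or 3 no longer implies per-task OS thread creation and joining. A minimal sketch in Java (the per-actor check bodies and class name are illustrative):

      import java.util.List;
      import java.util.concurrent.Callable;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      final class LineOfSightScheduler {
          // One long-lived pool, created once; threads are reused across frames.
          private static final ExecutorService POOL =
              Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

          // One task per actor; invokeAll blocks until every check has finished,
          // so this acts as a per-frame barrier.
          static void runFrame(List<Callable<Void>> perActorChecks)
                  throws InterruptedException {
              POOL.invokeAll(perActorChecks);
          }
      }

    Task creation is then just an object allocation, which stays cheap even at actor-object granularity, while the number of real OS threads remains constant.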

    Read the article

  • GLSL billboard: moving the center of rotation

    - by Jacob Kofoed
    I have successfully set up a billboard shader that works: it can take in a quad and rotate it so it always points toward the screen. I am using this vertex shader:

      void main() {
          vec4 tmpPos = (MVP * bufferMatrix * vec4(0.0, 0.0, 0.0, 1.0))
                      + (MV * vec4(vertexPosition.x * 1.0 * bufferMatrix[0][0],
                                   vertexPosition.y * 1.0 * bufferMatrix[1][1],
                                   vertexPosition.z * 1.0 * bufferMatrix[2][2],
                                   0.0));
          UV = UVOffset + vertexUV * UVScale;
          gl_Position = tmpPos;
      }

    bufferMatrix is the model matrix; it is an attribute in order to support instanced drawing. The problem is best explained through pictures. (Picture 1: the start position of the camera. Picture 2: the same scene, looking in from 45 degrees to the right.) Obviously, as each character is its own quad, the shader rotates each one around its own center towards the camera. What I in fact want is for them to rotate around a shared center; how would I do this? What I have been trying to do thus far is:

      mat4 translation = mat4(1.0);
      translation = glm::translate(translation, vec3(pos) * 1.f * 2.f);
      translation = glm::scale(translation, vec3(scale, 1.f));
      translation = glm::translate(translation, vec3(anchorPoint - pos) / vec3(scale, 1.f));

    where translation is the bufferMatrix sent to the shader. What I am trying to do is offset the center, but this might not be possible with a single matrix..? I am interested in a solution that doesn't require CPU calculations each frame, but rather sets things up once and then lets the shader do the billboard rotation. I realize there are many different solutions, like merging all the quads together, but I would first like to know if the approach of offsetting the center is possible. If it all seems a bit confusing, it's because I'm a little confused myself.
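
    For reference, rotating about a shared anchor point c rather than each quad's own origin is the standard conjugation of transforms:

      M = T(c) * R * T(-c)

    i.e., translate the anchor to the origin, rotate, translate back. The catch with the shader above is that the billboard rotation is effectively applied per quad around the quad's own origin, after the per-instance matrix, so the offset baked into bufferMatrix gets undone; one way to get a shared center is to pass the anchor per instance and billboard the offset from it (vertex position minus anchor) instead of the vertex position itself.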

    Read the article

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation, it appears that the problem is something to do with my Wi-Fi card. Speeds are normal when I connect via cable.
    Edit 2: The problem was solved. It was something to do with the wireless card drivers.
    I normally use Windows 7 on my laptop and have internet speeds that are normally about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, although occasionally they are significantly faster than that, or even stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue may be with my ISP, as things seem slower than usual even in Windows, but I suspect that it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this? Potentially relevant information:
    - I've dual-booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem.
    - ISP: Rogers (major Canadian ISP)
    System info (Gateway NV53a laptop):
    Operating system: MS Windows 7 Home Premium 64-bit
    CPU: AMD Phenom II N970 (Caspian, 45nm technology)
    RAM: 6.00 GB dual-channel DDR3 @ 664MHz (9-9-9-24)
    Motherboard: Gateway SJV51_DN (Socket S1G4)
    Graphics: Generic PnP Monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
    Hard drive: 733GB TOSHIBA MK7559GSXP ATA Device (SATA)
    Networking: connected through Wi-Fi, Atheros AR5B97 Wireless Network Adapter

    Read the article

  • Using Coherence API to get POF bytes

    - by Bruno.Borges
    Someone raised the question of how to use the Coherence API to get the bytes of an object in POF (Portable Object Format) programmatically. So I came up with this small piece of code that shows the very cool API's simple usage :-)

      // imports (the Coherence types live in the com.tangosol.io packages)
      import com.tangosol.io.WrapperBufferOutput;
      import com.tangosol.io.WriteBuffer.BufferOutput;
      import com.tangosol.io.pof.SimplePofContext;
      import java.io.ByteArrayOutputStream;
      import java.io.DataOutput;
      import java.io.DataOutputStream;
      import java.util.Arrays;

      SimplePofContext spc = new SimplePofContext();
      // consider UserSerializer an implementation of PofSerializer
      spc.registerUserType(0, User.class, new UserSerializer());

      User u = new User();
      u.setId(21);
      u.setName("Some Name");
      u.setEmail("[email protected]");

      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      DataOutput dataOutput = new DataOutputStream(baos);
      BufferOutput bufferOutput = new WrapperBufferOutput(dataOutput);
      spc.serialize(bufferOutput, u);

      byte[] byteArray = baos.toByteArray();
      System.out.println(Arrays.toString(byteArray));

    Easy, isn't it?

    Read the article

  • Scene graphs and spatial partitioning structures: What do you really need?

    - by tapirath
    I've been fiddling with 2D games for a while, and now I'm trying to get into 3D game development, so I thought I should get my basics right first. From what I've read, scene graphs hold your game objects/entities and their relations to each other, like 'a tire' being the child of 'a vehicle'. They are mainly used for frustum/occlusion culling and for minimizing the collision checks between objects. Spatial partitioning structures, on the other hand, are used to divide a big game object (like the map) into smaller parts, so that you can gain performance by drawing only the relevant polygons, and again by limiting the collision checks to those polygons only. Also, a spatial partitioning data structure can be used as a node in a scene graph. But... I've been reading about both subjects, and I've seen a lot of "scene graphs are useless" and "BSP performance gain is irrelevant with modern hardware" kinds of articles. Also, some of the game engines I've checked, like gameplay3d and jMonkeyEngine, use only a scene graph (though that may be because they don't want to limit the developers), whereas games like Quake and Half-Life use only spatial partitioning. I'm aware that the usage of these structures depends very much on the type of game you're developing, so for the sake of clarity let's assume the game is an FPS like Counter-Strike, with some better outdoor-environment capabilities (like a terrain). The obvious question is: which one is needed, and why (considering modern hardware capabilities)? Thank you.

    Read the article

  • How to stay productive? What time management software is available?

    - by andrewsomething
    So, since I started using askubuntu.com, I've spent entirely too much time here answering other people's questions. Now maybe someone could help me with that by answering this one. I'm looking for time management software for Ubuntu. There are a number of these programs floating around for Windows; RescueTime is one that is very popular. The key features that I'd like to see in a Linux app, which RescueTime has, are:
    1) Automatically recording what application you are using, including what websites you visit.
    2) Reports and graphs on your time usage.
    3) Notifications for when you have spent too much time on "distractions."
    While RescueTime doesn't officially support Linux, there is an open-source RescueTime Linux Uploader. Unfortunately, it seems to only support Firefox and Epiphany for website tracking, and I'm a Chromium user. The other major drawback of RescueTime is that it is a web service. I'd much rather not upload detailed information about how I spend my time to some third party; Google already knows too much about me as it is. Project Hamster, a GNOME time management app, comes so close. Sadly, it does not automatically track what you are doing, and if I had enough discipline to manually report to an applet what I was up to, I doubt I'd need this. (How cool would it be if it provided some Zeitgeist integration to handle that part?)

    Read the article

  • VMPlayer 9, Xubuntu 12.10, Rails Development - Freezing frequently

    - by douglasisshiny
    I have a new Vizio ultrabook that came with Windows 7. I develop Rails applications, and it's a pain to do that in Windows, so I set up a Xubuntu VM with 1 GB of RAM and 2 CPU cores. I basically keep the VM open all the time and have enough memory not to worry. Sometimes I pause the VM. For the first few days, everything was fine. On the fourth day, Xubuntu froze up while running a test (with Guard and RSpec). I didn't think much of it, restarted the VM, and went on my way. The freezes started becoming more frequent, though. I don't think they happen only when I run a test, but often they do. They happen quickly, too: start up the VM, save a file, a test runs, it freezes - all within 5 minutes. Of note: the VM is using a shared folder from Windows (where the code is). This may be the problem. Has anyone else experienced something like this?

    Read the article
