Search Results

Search found 15563 results on 623 pages for 'django model'.

Page 382 of 623

  • Reasons to disable game save during combat (e.g. Mass Effect 2)

    - by Steve V.
    So I've been playing Mass Effect 2 (PC) and one of the things I've noticed is that you can only save your game when you're not engaged in combat. As soon as the first enemy shows up on your radar, the save button is disabled. Once combat is over, save functionality reappears. It seems reasonable to assume that Mass Effect 2 is a state machine, and therefore, the internal state of the program at any moment can be captured and reloaded later. This is basically a solved problem - games have been designed this way since the Half-Life era. It also seems reasonable to assume that BioWare knew what they were doing when they made the decision not to follow this model - it's a tried and true system; BioWare wouldn't have done it the way they did without some good reason. What reasons are there to disable game save functionality during combat?

    Read the article

  • Getting Started with Component Architecture: DI?

    - by ashes999
    I just moved away from MVC towards something more component-architecture-like. I have no concept of messages yet (it's rough prototype code); objects just get internal properties and values of other classes for now. That issue aside, it seems like this is turning into an aspect-oriented-programming challenge. I've noticed that all entities with, for example, a position component will have similar properties (get/set X/Y/Z, rotation, velocity). Is it a common practice, and/or a good idea, to push these behind an interface and use dependency injection to inject a generic class (e.g. PositionComponent) which already has all the boilerplate code? (I'm sure the answer will affect the model I use for message passing.)
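    As a minimal sketch of what that could look like, assuming constructor injection and illustrative names (Positioned, PositionComponent and Player are not from the question):

        // Hypothetical component interface: any entity that "has a position" exposes it here.
        interface Positioned {
            PositionComponent position();
        }

        // Reusable component that owns the boilerplate state (X/Y/Z, rotation, velocity).
        class PositionComponent {
            double x, y, z;
            double rotation;
            double velocity;
        }

        // An entity receives its components via constructor injection instead of inheriting them.
        class Player implements Positioned {
            private final PositionComponent position;

            Player(PositionComponent position) {   // supplied by a DI container or plain wiring code
                this.position = position;
            }

            public PositionComponent position() {
                return position;
            }
        }

    Whether the injection is done by a container or by hand, the point of the sketch is that the get/set boilerplate lives once, in the component, and entities only hold a reference to it.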

    Read the article

  • Miss Open World? View this Roadmap Presentation

    - by PeopleTools Strategy
    If you were unable to attend Oracle Open World in September, you missed out on some important PeopleSoft messages. Don't despair! You now have a chance to receive an update on PeopleSoft’s presence at Oracle OpenWorld 2013 and the key messages delivered there. You can view the “PeopleSoft Update and Roadmap” webcast found here on the Quest Users Group site. (Note: this is available with a FREE subscriber account. Anyone can sign up here at no cost.) This webcast recording presents the significant adoption and momentum behind PeopleSoft 9.2. Viewers will also learn about the new release model for continuously delivering new capabilities to PeopleSoft customers at a lower cost, enabled by the new PeopleSoft Update Manager. There are also compelling live demonstrations of the major investment areas for PeopleSoft, including a new PeopleSoft user experience enabling mobile solutions as well as In-Memory PeopleSoft applications.

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager:

    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of this is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
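    As a rough illustration of the first (read-only cache) approach, here is a minimal sketch that uses a plain ConcurrentHashMap as a stand-in for the Coherence cache and a hypothetical OrderDao with a version column for optimistic concurrency control; the class and method names are illustrative, not from the article:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Stand-in for the cache; in a real deployment this would be a Coherence NamedCache.
        class OrderCache {
            private final ConcurrentMap<Long, Order> cache = new ConcurrentHashMap<>();
            private final OrderDao dao;   // hypothetical data access object for the system of record

            OrderCache(OrderDao dao) { this.dao = dao; }

            // Reads may be served from the cache; a miss falls through to the database.
            Order read(long id) {
                return cache.computeIfAbsent(id, dao::load);
            }

            // All writes go synchronously to the database under optimistic concurrency control;
            // the cache entry is invalidated so the next read reloads the committed state.
            void update(Order changed) {
                boolean applied = dao.updateIfVersionMatches(changed);  // e.g. UPDATE ... WHERE version = ?
                if (!applied) {
                    throw new IllegalStateException("concurrent update detected; retry the transaction");
                }
                cache.remove(changed.id());
            }
        }

        record Order(long id, long version, String status) {}

        interface OrderDao {
            Order load(long id);
            boolean updateIfVersionMatches(Order changed);
        }

    The version check is what makes stale cache reads harmless: even if the cache hands out an old copy, the database update only applies when the row has not changed underneath it.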

    Read the article

  • Advice on triple/quadruple-booting?

    - by professorfish
    I am currently running Windows 7 Home Premium x64 on my laptop. I would like to install more than one Linux distro, IN ADDITION TO Windows 7. How do I go about this, what do I need to be careful and aware of, is it possible? The specific distros I might eventually install:

    - Definitely: Ubuntu (is it a good idea to install the Linux-Secure-Remix version?)
    - Almost definitely: OpenSUSE
    - Probably: Zorin
    - Possibly: Arch
    - Possibly: Fedora
    - Possibly: FreeBSD

    Computer details:

    - Successfully used WUBI for Ubuntu in the past
    - Recently reinstalled Windows using the RECOVERY partition
    - Windows 7 Home Premium x64
    - Model: ASUS K53U series
    - AMD Brazos Dual Core E450 1.65 GHz
    - 750GB hard drive, currently partitioned into C: (300 GB total, 246 GB free), D: (373 GB total, 167 GB free), and RECOVERY (the rest of the space, I think)
    - 4GB RAM

    Can I be sure that GRUB will work, if WUBI has worked? In short, how do I go about triple- or quadruple-booting Windows 7, Ubuntu and other distros? What do I need to be aware of? How do I set up the partition structure? Thank you in advance

    Read the article

  • In the world of .Net, managed code and the web is there still a place for VBA?

    - by MrTelly
    Microsoft has moved away from the COM stack, VB6 is so last century and .Net rules the (MS) roost. Yet I find myself still banging out reams of VBA code - for a new project automating Excel, seeing as you ask. I've tried doing the same kind of thing using VSTO and it was just too damn buggy/hard/inefficient, with a broken development model. I can't get rid of the feeling that I'm missing something; OTOH I really can't see a better way of solving this problem. What are your thoughts?

    Read the article

  • Distributed cache and improvement

    - by philipl
    I had this question in an interview:

        // Web service method; x is the request key, map is a static HashMap (created as a singleton)
        if (!map.containsKey(x)) {
            // perform some function to retrieve result y
            map.put(x, y);
        }
        return y;

    The interviewer asked general questions such as what is wrong with this distributed cache implementation, then asked how to improve on it, given that the distributed servers will end up with different cached key/value pairs in the map. There are simple mistakes to point out about synchronization and the key object, but what really startled me was that this guy thinks that moving to a database implementation solves the problem that different servers will have different map contents, i.e. the situation where value x is not on server A but is on server B, so redundant data has to be retrieved on server A. Does his thinking make any sense? (As I understand it, this is the basic drawback of a distributed cache compared with the database model; it seems he does not understand it at all.) What is the typical solution for the cache growth issue (weak references?) and the sync issue (not knowing which server already has the key cached - use load balancing)? Thanks
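    For the local synchronization part, one common fix (an illustration, not part of the interview question) is to let ConcurrentHashMap.computeIfAbsent perform the check-then-put atomically:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        class LocalCache {
            // ConcurrentHashMap makes the check-then-put atomic per key, unlike a bare HashMap.
            private final ConcurrentMap<String, String> map = new ConcurrentHashMap<>();

            String lookup(String x) {
                // The loader runs at most once per key, even under concurrent calls.
                return map.computeIfAbsent(x, this::expensiveFunction);
            }

            private String expensiveFunction(String key) {
                return "result-for-" + key;   // placeholder for the real retrieval
            }
        }

    This fixes the thread-safety and duplicate-loading problems on a single server; the cross-server inconsistency the interviewer raised still needs either a shared cache tier or consistent routing of keys to servers, not just a thread-safe map.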

    Read the article

  • Miss Oracle Open World? View the PeopleSoft Roadmap Presentation Here

    - by John Webb
    If you were unable to attend Oracle Open World in September, you missed out on some important PeopleSoft messages. Don't despair! You now have a chance to receive an update on PeopleSoft's presence at Oracle OpenWorld 2013 and the key messages delivered there. You can view the “PeopleSoft Update and Roadmap” webcast found here on the Quest Users Group site. (Note: this is available with a FREE subscriber account. Anyone can sign up here at no cost.) This webcast recording presents the significant adoption and momentum behind PeopleSoft 9.2. Viewers will also learn about the new release model for continuously delivering new capabilities to PeopleSoft customers at a lower cost, enabled by the new PeopleSoft Update Manager. There are also compelling live demonstrations of the major investment areas for PeopleSoft, including a new PeopleSoft user experience enabling mobile solutions as well as In-Memory PeopleSoft applications. You can view all presentations in the Oracle Open World 2013 Content Catalog.

    Read the article

  • What Design Pattern is separating transform converters

    - by RevMoon
    For converting a Java object model into XML I am using the following design: for different types of objects (e.g. primitive types, collections, null, etc.) I define a separate converter, which acts appropriately with respect to the given type. This way it can easily be extended without adding code to a huge if-else-then construct. The converters are chosen by a method which tests whether the object is convertible at all and by using a priority ordering. The priority ordering is important so that, say, a List is not converted by the POJO converter: even though it is convertible as such, it would be more appropriate to use the collection converter. What design pattern is that? I can only think of a similarity to the command pattern.
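    A minimal sketch of the selection mechanism described above; the Converter interface and its canConvert/priority methods are illustrative names, not taken from the question:

        import java.util.Comparator;
        import java.util.List;

        interface Converter {
            boolean canConvert(Object value);   // "is the object convertible at all?"
            int priority();                     // higher wins when several converters match
            String toXml(Object value);
        }

        class ConverterRegistry {
            private final List<Converter> converters;

            ConverterRegistry(List<Converter> converters) { this.converters = converters; }

            String convert(Object value) {
                // Pick the applicable converter with the highest priority, so e.g. a List
                // goes to the collection converter rather than the generic POJO converter.
                return converters.stream()
                        .filter(c -> c.canConvert(value))
                        .max(Comparator.comparingInt(Converter::priority))
                        .orElseThrow(() -> new IllegalArgumentException("no converter for " + value))
                        .toXml(value);
            }
        }

    In this reading, each converter is a Strategy, and the registry's ordered "who can handle this?" selection is what gives the design its Chain-of-Responsibility flavour.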

    Read the article

  • Validation and Verification explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully this is my last thread about V&V, as I found a text by B. Boehm which I just do not understand well (likely my technical English is not that good). http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that deviation leads only to changes in these derived products (design, code). But he says it begins with design and ends with acceptance tests (you can check the V model inside). The thing is, I have accepted ISO 12207 in terms of all testing being validation, yet it does not make any sense here. In order to be sure the product complies with requirements (acceptance test) I need to test it. Also it says that validation problems mean that requirements are bad and need to be changed - which does not happen with the testing that testers do, who just check correspondence with the requirements.

    Read the article

  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and for example both are certified for DirectX 10. Is there a guarantee that the output for a given program (game) is going to be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that it is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what is the guarantee offered? Will the output be exactly the same, or just perceptually the same to the human eye?

    Read the article

  • Which graphics library should I be using?

    - by DaveDev
    I have been developing and maintaining a WPF application, for which I've recently been tasked with adding a 3D representation of some of the data. I'm new to graphics programming in every kind of way, so I'm curious whether I should stick with the 3D graphics capabilities built into WPF or investigate other solutions, like OpenTK or SharpGL. My objective is to represent the data so that it will eventually appear similar to the image in the original question, with nodes connected by lines. I need to rotate the image around each axis, and each node will be a 3D model of the device it represents. So far, I've been able to experiment with the tutorial outlined here: Windows Presentation Foundation (WPF) 3D Tutorial, and it was helpful as an introduction. But I can see that there are other ways to implement 3D graphics solutions and I wonder if they are more suitable for my needs, or should I stick with the in-built WPF solution? What are the pros and cons of each?

    Read the article

  • How to install Ubuntu using a USB stick?

    - by J. N.
    I cannot install Ubuntu 11.10 from a USB stick. It doesn't boot to the Ubuntu installation page but remains on Windows 7. I downloaded the 11.10 version iso file and burnt it to a USB stick. After I inserted it, the USB stick showed the Ubuntu installer icon and I clicked Wubi to try to install it. But after the restart it didn't boot to Ubuntu; it stayed on Windows 7. It gave the error "WindowsBackend" object has no attribute "cd_path" when I chose "Help me to boot from CD". I thought it was a problem with my computer model (Acer TravelMate 8481), but it can't boot on an old computer running XP either. How can I solve this problem and install Ubuntu to replace Windows?

    Read the article

  • Installed Ubuntu using WUBI due to lack of CD Drive

    - by Chantelle
    I have installed 12.04 via WUBI because my computer does not have a CD drive. Is there any way that I can delete Windows 7 from my computer and use the whole HD for Ubuntu? I ask this question because, for one reason or another, I cannot boot from a bootable USB stick (either in Windows or Ubuntu; the USB ports do work, however, because I am able to connect to the Internet from any of them using my cell phone's tethering plan). Edit: Just found out the problem. Eee PC Model 1018P's do not boot from any format other than FAT and FAT16 - not FAT32.

    Read the article

  • Assigning a colour to imported obj. files that are being used as default material

    - by Salino
    I am having a problem with assigning a colour to the different meshes that I have on one object. The technique that I have used is the first approach in this question: Is it possible to export a simulation (animation) from Blender to Unity? So what I would like to do is the following. I have about 107 meshes that are different frames from the shape key animation of my Blender model. What I would like is for the first mesh to be bright green and, up to the 40th mesh, for the colour to turn white/greyish. The best would be if I could assign each mesh a colour by hand; however, they all use the default material. And if I assign the object a colour, the whole "animation" is going to be in that colour.

    Read the article

  • WiFi USB adapter showing the Network ..... but no connection in effect

    - by Idrees
    I have a Pentium 4 system, 3 GHz, 1 GB RAM (no built-in WiFi). I installed Ubuntu 12.10 on my PC and it works fine. It picked up all the drivers for audio and video itself. I plugged in a TP-Link 54Mbps High Gain Wireless USB Adapter (TL-WN422G) (link for the device: http://www.tp-link.com/en/products/details/?model=TL-WN422G). Now what happens is that the WiFi network is detected and shown in "Network Connections", and it is also connected to it, but when I open Firefox it is as if there is no internet connection at all.

    Read the article

  • How do you ensure consistent experience across multiple graphics cards (or even driver versions)?

    - by Grigory Javadyan
    So I was writing a simple 2D game with OpenGL and SDL and had this problem where there was awful tearing when running in windowed mode (even though I explicitly asked SDL_SetVideoMode to use double buffering). I didn't worry about it all too much because most of the time the game grabs the entire screen; windowed mode is just for debugging. Anyway, yesterday I updated my nVidia drivers and the tearing disappeared; the game runs smoothly and looks nice in windowed mode too. I can see how the problem may be in the graphics driver, but this leads to a question. Obviously, professional game developers have to deal with a lot of different hardware/software configurations. What techniques do they use to make sure the game looks roughly the same on different graphics cards, or even on the same model of graphics card but with different driver versions?

    Read the article

  • Google Analytics w3wp.exe?

    - by s15199d
    In this link Google defines a Visit. The key part that interests me now is this: "If a user is inactive on your site for 30 minutes or more, any future activity will be attributed to a new session." Would an idle user (e.g. an employee whose PC is left on over the weekend) record "activity" as a result of the w3wp.exe process recycling? Our site caching model refreshes every 30 minutes. Could this trigger "activity" for an idle user? I asked this on the Google Analytics forum a week ago and have had no response.

    Read the article

  • How should I implement the repository pattern for complex object models?

    - by Eric Falsken
    Our data model has almost 200 classes that can be separated out into about a dozen functional areas. It would have been nice to use domains, but the separation isn't that clean and we can't change it. We're redesigning our DAL to use Entity Framework and most of the recommendations that I've seen suggest using a Repository pattern. However, none of the samples really deal with complex object models. Some implementations that I've found suggest the use of a repository per entity. This seems ridiculous and unmaintainable for large, complex models. Is it really necessary to create a UnitOfWork for each operation, and a Repository for each entity? I could end up with thousands of classes. I know this is unreasonable, but I've found very little guidance on implementing Repository, Unit of Work, and Entity Framework over complex models and realistic business applications.
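    One commonly suggested alternative, sketched here with Java-style generics purely for illustration (the question is about .NET/Entity Framework, and these type names are not from the question), is a single generic repository rather than one repository class per entity:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Optional;
        import java.util.function.Function;

        // One generic repository covers the basic operations for every entity type.
        interface Repository<T, ID> {
            Optional<T> findById(ID id);
            List<T> findAll();
            void add(T entity);
            void remove(T entity);
        }

        // Minimal in-memory implementation, parameterised by how an entity exposes its id.
        class InMemoryRepository<T, ID> implements Repository<T, ID> {
            private final List<T> items = new ArrayList<>();
            private final Function<T, ID> idOf;

            InMemoryRepository(Function<T, ID> idOf) { this.idOf = idOf; }

            public Optional<T> findById(ID id) {
                return items.stream().filter(e -> idOf.apply(e).equals(id)).findFirst();
            }
            public List<T> findAll() { return new ArrayList<>(items); }
            public void add(T entity) { items.add(entity); }
            public void remove(T entity) { items.remove(entity); }
        }

    Entity-specific query methods can then live in a handful of specialised repositories for the aggregates that actually need them, instead of one class (plus a unit of work) per entity.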

    Read the article

  • Benefits of TOGAF or similar?

    - by Lunatik
    I can read the website blurb and be impressed by the alleged benefits, but I haven't worked anywhere or with anyone who followed the TOGAF (or any alternative) architecture framework. Our organisation has declared itself dedicated to moving from what is currently a fairly shambolic design & development model towards something approaching a modern structured process. Things like TOGAF have been mentioned as helping achieve a world-class enterprise development environment (!) but I'm convinced that no-one here really understands the real-world benefits that wholesale adoption might bring and, perhaps more importantly, the effort/pain required to achieve the same. Do you have experience in using TOGAF or similar to wrestle control in an organisation? Do you think that use of the framework brought any benefit? Edit: For clarification TOGAF is "The Open Group Architecture Framework", a detailed method and set of tools for developing an enterprise architecture. See: http://www.opengroup.org/architecture/togaf8-doc/arch/

    Read the article

  • Optimize bootup sequence

    - by ubuntudroid
    I'm on Ubuntu 11.04 (upgraded from 10.10) and suffering really high bootup times. It got so annoying that I decided to dive into bootchart analysis myself. Therefore I installed bootchart and restarted the system, which generated this chart. However, I'm not really experienced in reading such stuff. What causes the long bootup sequence? Edit: Here is the output of hdparm -i /dev/sda:

        /dev/sda:
        Model=SAMSUNG HD501LJ, FwRev=CR100-12, SerialNo=S0MUJ1EQ102621
        Config={ Fixed }
        RawCHS=16383/16/63, TrkSize=34902, SectSize=554, ECCbytes=4
        BuffType=DualPortCache, BuffSize=16384kB, MaxMultSect=16, MultSect=16
        CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=976773168
        IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
        PIO modes: pio0 pio1 pio2 pio3 pio4
        DMA modes: mdma0 mdma1 mdma2
        UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
        AdvancedPM=no WriteCache=enabled
        Drive conforms to: unknown: ATA/ATAPI-3,4,5,6,7
        * signifies the current active mode

    And here is the output of hdparm -tT /dev/sda:

        /dev/sda:
        Timing cached reads: 2410 MB in 2.00 seconds = 1205.26 MB/sec
        Timing buffered disk reads: 258 MB in 3.02 seconds = 85.50 MB/sec

    Read the article

  • Technical Article: Experimenting with Java Timers

    - by Tori Wieldt
    OTN's new tech article is "Experimenting with Java Timers" by T. Lamine Ba. This article studies time: how Java handles timers and the scheduling of tasks. Java timers are utilities that let you execute threads or tasks at a predetermined future time, and these tasks can be repeated according to a set frequency. The article starts with a simple "Hello World" program in a web application that's composed of JavaServer Pages (JSP) and uses the model-view-controller (MVC) design pattern. The IDE used in this article is NetBeans IDE 7.1, but you can use any IDE that supports Java. "Experimenting with Java Timers" demonstrates how to get started scheduling jobs with Java. To learn about Swing timers, check out the Java tutorial "How to Use Swing Timers" and additional information in the Java Platform, Standard Edition 7 API Specification for Class Timer.
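    As a small illustration of the kind of scheduling the article covers (this snippet is not taken from the article), java.util.Timer can run a TimerTask after a delay and repeat it at a fixed period:

        import java.util.Timer;
        import java.util.TimerTask;

        public class TimerDemo {
            public static void main(String[] args) {
                Timer timer = new Timer("demo-timer");

                TimerTask task = new TimerTask() {
                    @Override
                    public void run() {
                        System.out.println("Hello from the timer thread at " + System.currentTimeMillis());
                    }
                };

                // Start after a 1-second delay, then repeat every 5 seconds.
                timer.schedule(task, 1_000L, 5_000L);
            }
        }

    Calling timer.cancel() stops the scheduling; for GUI work the article's pointer to Swing timers is the better fit, since Swing components must be updated on the event dispatch thread.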

    Read the article

  • 12.04 freezes on install

    - by CHris
    I'm new to Ubuntu and can't for the life of me understand how to get around this issue. I'm having a problem where, when I insert the boot disk that I burned for 12.04, it loads and gives me the language option and lets me select an installation type. I select either Try Ubuntu or Install Ubuntu and it begins to load. Then after about 5 minutes of the purple screen with the dots under the Ubuntu logo, it freezes the whole computer, and the only way to get out is to turn off the laptop with the power button. I have read previous threads saying people had similar "freezing" issues, including ones that mention my model of laptop, the HP Pavilion DV1000. But it appears their problems are post-installation and relate to their wireless cards. I have tried changing the boot options (F6) and I am totally stuck. Can anyone shed any light on this issue? Thanks in advance.

    Read the article

  • Get/Post Controller Logic Best Practice

    - by Brian Mains
    In an ASP.NET MVC project (Razor), I have a Get request which loads one of two properties on a model, depending on the parameter passed into the action method. If the parameter has a value, the Group property is supplied data; if not, the Groups collection property is supplied data. In the post action method, when I process the data, I have to provide similar logic to repopulate the view, and could get away with returning Action(param) (the get response) to the caller. My question is, based on experience, is that a good practice to get into? I see some downsides to doing that, but it removes redundant code. Or is there a better alternative?
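    The usual alternative to calling the GET action from the POST action is to pull the shared population logic into a private helper that both actions call. A rough sketch in plain Java (the original question is ASP.NET MVC Razor; every name below is illustrative):

        import java.util.List;

        class GroupViewModel {
            Group group;          // populated when a specific group id is supplied
            List<Group> groups;   // populated when no id is supplied
        }

        class GroupController {
            private final GroupService service;   // hypothetical data source

            GroupController(GroupService service) { this.service = service; }

            GroupViewModel get(Long groupId) {
                return populate(groupId);
            }

            GroupViewModel post(Long groupId, GroupForm form) {
                service.save(form);
                // Reuse the same population logic instead of duplicating it
                // or invoking the GET action directly.
                return populate(groupId);
            }

            private GroupViewModel populate(Long groupId) {
                GroupViewModel model = new GroupViewModel();
                if (groupId != null) {
                    model.group = service.findGroup(groupId);
                } else {
                    model.groups = service.findAllGroups();
                }
                return model;
            }
        }

        record Group(long id, String name) {}
        record GroupForm(String name) {}

        interface GroupService {
            Group findGroup(long id);
            List<Group> findAllGroups();
            void save(GroupForm form);
        }

    In a real web framework the successful POST would more often end with a redirect back to the GET action (post/redirect/get) rather than rendering the view directly, which also avoids duplicate submissions on refresh.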

    Read the article

  • Why does this exported cube have too many vertices?

    - by Joewsh
    I'm trying to export md5mesh models. Just as a test I decided to export a simple cube (i.e. with 8 vertices). When I opened the .md5mesh file it lists the following:

        numverts 24
        numtris 12
        numweights 24

    Obviously the number of triangles makes sense: 6 faces * 2 to triangulate = 12. The model only has one bone, so it even makes sense that there is one weight for each vertex. The question is though, why is the file listing 24 vertices? Is the problem the exporter, or is this normal for md5mesh files? Is it something that you have to rectify when you come to parsing the file in engine? I don't want to be parsing or drawing duplicated vertices without reason. I'm guessing it's something to do with shading and normals. Is it a case of listing each vert 3 times, one for each facing normal?

    Read the article
