Search Results

Search found 24560 results on 983 pages for 'memory model'.


  • Why aren't there 8gb RAM modules yet?

    - by user49951
    Why has RAM module development seemingly been stuck at the same size for a while now (a couple of years)? I bought 2x2GB modules 2 years ago, and modules are still the same size now, with prices even higher. I want more memory, because I work a lot on my computer and I just need it. What is going on? Hardware/memory progress was being made constantly until these last couple of years, and I've been a heavy computer user for over 15 years. Why aren't there 4GB/8GB modules yet? I would gladly replace my DDR2 motherboard with a DDRX one if there were at least 4GB DDRX modules for a reasonable price. Now we have a situation with very cheap USB drives reaching 64GB, while RAM modules are stuck at a pathetic 2GB. Sounds like some sort of conspiracy.

    Read the article

  • How much ram to be able to convert large (5-6MB) jpegs? [closed]

    - by cosmicbdog
    I've got a project where we want to be processing large JPEGs (5-6MB) with Apache and PHP (using the GD library). My understanding is that the server decompresses the image into an uncompressed bitmap in memory, making it quite RAM-heavy, and currently we're unable to do it with our 1GB of memory. Here's the error we get:

        Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 17408 bytes)

    How much RAM should we be looking at running with to process images of this size? Edit: As Chris S the purist highlighted below, my post is apparently vague. I am doing the most basic and common manipulation of an image, say turning it from a 4352px x 3264px JPEG of 5MB in size into a 900px x 675px file.
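
    A rough back-of-the-envelope sketch (Python, purely illustrative) of why a 64MB memory_limit is not enough here: GD decompresses a JPEG into an uncompressed truecolor image in memory, at very roughly 4-5 bytes per pixel (the exact overhead varies), so the source and destination images alone already exceed the limit shown in the error.

        # Rough working-memory estimate for the resize described above.
        # Assumes ~5 bytes per pixel for a truecolor GD image; treat as an estimate only.
        BYTES_PER_PIXEL = 5

        def mb(nbytes):
            return nbytes / 1024.0 / 1024

        src = 4352 * 3264 * BYTES_PER_PIXEL   # decompressed 5MB JPEG, roughly 67.7 MB
        dst = 900 * 675 * BYTES_PER_PIXEL     # resized output, roughly 2.9 MB

        print(mb(src), mb(dst), mb(src + dst))  # ~70.6 MB total, already past the 64 MB limit

    By that estimate, a memory_limit of 128MB (or more, for larger originals) is a more realistic starting point than 64MB.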

    Read the article

  • Find the model of my motherboard without opening the computer [closed]

    - by Code
    Possible Duplicate: Find out what the motherboard on my computer is. I need to find the model of my motherboard so I can work out what soundcard/chip it uses and get some drivers for it. Is there any way to get this from inside XP? I looked through Device Manager but haven't seen anything that would tell me. I built the system over a year ago and don't have any receipts to check what it was.

    Read the article

  • looking for a model number recommendation for a network setup of 49 switches [closed]

    - by Bahrain Admin
    I'm looking to set up a site with 49 edge switches connected by fiber to a central switch. Three VLANs will be set up to handle data, telephony, and streaming media. Each edge switch should have provision for 2 SFP modules for failover, and the core switch needs to be able to handle this failover. I'm getting lost on the Cisco site with their specs and recommendations. If anyone could suggest a suitable model number for the core switch and the edge switch, it would be really appreciated.

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has never lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, "So and so application is slow, what's wrong with the SQL Server." One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call the problem isn't related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow; we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows that Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I'll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don't have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely ad hoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to completely flush the buffer cache, not just once initially, but then another time during the query's execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to that the consistent bottoming out of Free List Pages and Free List Stalls/sec spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; it is instead an innocent bystander and victim.

    Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is consuming an excessive amount of memory and is bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single-use plans waste space in the plan cache, especially when they are ad hoc plans for statements with concatenated filter criteria that are not likely to recur with any frequency. SQL Server 2005 doesn't natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn't write the app, and you can't change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, which, as discussed in Kalen's blog post referenced above, is not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn't much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn't until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn't believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
    As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates to each other. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post about it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
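
    Circling back to the plan cache sizing arithmetic quoted earlier in the post (the SQL Server 2005 RTM/SP1 rule of 75% of memory up to 8GB plus 50% of memory between 8GB and 64GB), here is a small, purely illustrative Python sketch that reproduces the bracketed calculations; the tier boundaries are taken from the figures in the post itself rather than from any official documentation:

        # Plan cache cap under the SQL Server 2005 RTM/SP1 sizing rule described above:
        # 75% of memory up to 8GB, plus 50% of memory between 8GB and 64GB.
        def plan_cache_cap_gb(max_server_memory_gb):
            first_tier = min(max_server_memory_gb, 8)
            second_tier = max(min(max_server_memory_gb, 64) - 8, 0)
            return first_tier * 0.75 + second_tier * 0.5

        print(plan_cache_cap_gb(6))    # the 6GB scenario above   -> 4.5
        print(plan_cache_cap_gb(14))   # 16GB box, max memory 14  -> 9.0
        print(plan_cache_cap_gb(28))   # 32GB box, max memory 28  -> 16.0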

    Read the article

  • Force Blank TextBox with ASP.Net MVC Html.TextBox

    - by Doug Lampe
    I recently ran into a problem with the following scenario: I have parent/child data with a one-to-many relationship from the parent to the child. I want to be able to update parent and existing child data AND add a new child record, all in a single post. I don't want to create a model just to store the new values. One of the things I LOVE about MVC is how flexible it is in dealing with posted data. If you have data that isn't in your model, you can simply use the non-strongly-typed HTML helper extensions and pass the data into your actions as parameters or use the FormCollection. I thought this would give me the solution I was looking for. I simply used Html.TextBox("NewChildKey") and Html.TextBox("NewChildValue") and added parameters to my action to take the new values. So here is what my action looked like:

        [HttpPost]
        public ActionResult EditParent(int? id, string newChildKey, string newChildValue, FormCollection forms)
        {
            Model model = ModelDataHelper.GetModel(id ?? 0);
            if (model != null)
            {
                if (TryUpdateModel(model))
                {
                    if (ModelState.IsValid)
                    {
                        model = ModelDataHelper.UpdateModel(model);
                    }
                    string[] keys = forms.GetValues("ChildKey");
                    string[] values = forms.GetValues("ChildValue");
                    ModelDataHelper.UpdateChildData(id ?? 0, keys, values);
                    ModelDataHelper.AddChildData(id ?? 0, newChildKey, newChildValue);
                    model = ModelDataHelper.GetModel(id ?? 0);
                }
                return View(model); // the original post has View(report), presumably a leftover variable name
            }
            return new EmptyResult();
        }

    The only problem with this is that MVC is TOO smart. Even though I am not using a model to store the new child values, MVC still passes the values back to the text boxes via the model state. The fix for this is simple but not necessarily obvious: simply remove the data from the model state before returning the view:

        ModelState.Remove("NewChildKey");
        ModelState.Remove("NewChildValue");

    Two lines of code to save a lot of headaches.

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements which will be in the release candidate. We've blogged about a couple of these items before. I plan to add detail. Let me know which ones you'd like to see more on:

    - Memory Manager Redesign: predictable sizing and governing of SQL memory consumption. sp_configure 'max server memory' now limits all memory committed by SQL Server; Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool); improved scalability of complex queries and operations that make >8K allocations; improved CPU and NUMA locality for memory accesses; a single memory manager that handles page allocations of all sizes; consistent out-of-memory handling and management across different internal components.

    - Optimized Memory Broker for Column Store indexes (Project Apollo).

    - Resource Governor: support larger-scale multi-tenancy by increasing the maximum number of resource pools (20 -> 64 for 64-bit); enable predictable chargeback and isolation by adding a hard cap on CPU usage; enable vertical isolation of machine resources; resource pools can be affinitized to individual or groups of schedulers or to NUMA nodes; new DMV for resource pool affinity.

    - CLR 4 support, adding the advantages of .NET Framework 4.

    - sp_server_diagnostics: captures diagnostic data and health information about SQL Server to detect potential failures; analyzes internal system state; reliable when nothing else is working.

    - New SQLOS DMVs (in 2008 R2 SP1): SQL Server related configuration - new DMV sys.dm_server_services; OS related resource configuration - new DMVs sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry; XEvents for SQL and OS related Perfmon counters; extensions to sys.dm_os_sys_info. See previous blog posts here and here.

    - Scale / mission critical: increased scalability - support for Windows 8 max memory and logical processors; Dynamic Memory support in Standard Edition; Hot-Add Memory enabled when virtualized; various Tier 1 performance improvements, including reduced instructions for superlatches.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Generate POCO classes in different project to the project with Entity Framework model

    - by Max
    I'm trying to use the Repository Pattern with EF4 using VS2010. To this end I am using POCO code generation by right-clicking on the entity model designer and clicking Add Code Generation Item. I then select the POCO template and get my classes. What I would like to be able to do is have my solution structured into a separate project for the Entity (POCO) classes and another project for the entity model and repository code. This means that my MVC project could use the POCO classes for strongly typed views etc. and not have to know about the repository or have to have a reference to it. To plug it all together I will have another separate project with interfaces and use IoC. Sounds good in my head; I just don't know how to generate the classes into their own project! I can copy them and then change the namespaces on them, but I wanted to avoid manual work whenever I change the schema in the db and want to update my model. Thanks

    Read the article

  • django-admin: creating,saving and relating a m2m model

    - by pastylegs
    I have two models:

        class Production(models.Model):
            gallery = models.ManyToManyField(Gallery)

        class Gallery(models.Model):
            name = models.CharField()

    I have the m2m relationship in my productions admin, but I want that functionality that when I create a new Production, a default gallery is created and the relationship is registered between the two. So far I can create the default gallery by overwriting the production's save:

        def save(self, force_insert=False, force_update=False):
            if not Gallery.objects.filter(name__exact="foo").exists():
                g = Gallery(name="foo")
                g.save()
                self.gallery.add(g)

    This creates and saves the model instance (if it doesn't already exist), but I don't know how to register the relationship between the two?
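
    One likely reason the relationship never gets registered is that a many-to-many row can only be added once the Production itself has a primary key, i.e. after super().save() has run. A minimal sketch of that ordering (illustrative only, using get_or_create; names as in the question):

        def save(self, force_insert=False, force_update=False):
            super(Production, self).save(force_insert, force_update)  # ensure self.pk exists before touching the m2m
            default_gallery, created = Gallery.objects.get_or_create(name="foo")
            self.gallery.add(default_gallery)  # add() is harmless if the relation already exists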

    Read the article

  • Using NHibernate with an EAV data model

    - by devonlazarus
    I'm trying to leverage NH to map to a data model that is a loose interpretation of the EAV/CR data model. I have most of it working but am struggling with mapping the Entity.Attributes collection. Here are the tables in question:

        Entities          EntityId (PK), EntityType
        EntityAttributes  EntityId (PK, FK -> Entities), AttributeId (FK -> Attributes), AttributeValue
        Attributes        AttributeId (PK), AttributeType
        StringAttributes  StringAttributeId (PK, FK -> Attributes), AttributeName

    The AttributeValue column is implemented as an sql_variant column and I've implemented an NHibernate.UserTypes.IUserType for it. I can create an EntityAttribute entity and persist it directly so that part of the hierarchy is working. I'm just not sure how to map the EntityAttributes collection to the Entity entity. Note the EntityAttributes table could (and does) contain multiple rows for a given EntityId/AttributeId combination:

        EntityId  AttributeId  AttributeValue
        --------  -----------  --------------
        1         1            Blue
        1         1            Green

    The StringAttributes row looks like this for this example:

        StringAttributeId  AttributeName
        -----------------  -------------
        1                  FavoriteColor

    How can I effectively map this data model to my Entity domain such that Entity.Attributes("FavoriteColors") returns a collection of favorite colors? Typed as System.String?

    Read the article

  • Synchronize model in MySQL Workbench

    - by Álvaro G. Vicario
    After reading the documentation for MySQL Workbench I got the impression that it's possible to alter a database on the server (e.g. add a new column) and later incorporate the DDL changes into your EER diagram. At least, it has a Synchronize Model option in the Database menu. I found it a nice feature because I could use a graphic modelling tool without becoming its prisoner. In practice, when I run the tool I'm offered these options:

        Model               Update   Source
        ================    ======   ======
        my_database_name    --> !    N/A
        my_table_name       --> !    N/A
        N/A                 --> !    my_database_name
        N/A                 --> !    my_table_name

    I can't really understand it, but leaving it as is I basically get:

        DROP SCHEMA my_database_name
        CREATE SCHEMA my_database_name
        CREATE TABLE my_table_name

    This is a dump of the model that overwrites all remote changes in my_table_name. Am I misunderstanding the feature?

    Read the article

  • ASP.NET MVC 2 RC2 Model Binding with NVARCHAR NOT NULL column

    - by Gary McGill
    I have a database column defined as NVARCHAR(1000) NOT NULL DEFAULT(N'') - in other words, a non-nullable text column with a default value of blank. I have a model class generated by the Linq-to-SQL Classes designer, which correctly identifies the property as not nullable. I have a TextAreaFor in my view for that property. I'm using UpdateModel in my controller to fetch the value from the form and populate the model object. If I view the web page and leave the text area blank, UpdateModel insists on setting the property to NULL instead of empty string. (Even if I set the value to blank in code prior to calling UpdateModel, it still overwrites that with NULL). Which, of course, causes the subsequent database update to fail. I could check all such properties for NULL after calling UpdateModel, but that seems ridiculous - surely there must be a better way? Please don't tell me I need a custom model binder for such a simple scenario...!

    Read the article

  • How to customize a many-to-many inline model in django admin

    - by Jonathan
    I'm using the admin interface to view invoices and products. To make things easy, I've set the products as inline to invoices, so I will see the related products in the invoice's form. As you can see I'm using a many-to-many relationship. In models.py:

        class Product(models.Model):
            name = models.TextField()
            price = models.DecimalField(max_digits=10, decimal_places=2)

        class Invoice(models.Model):
            company = models.ForeignKey(Company)
            customer = models.ForeignKey(Customer)
            products = models.ManyToManyField(Product)

    In admin.py:

        class ProductInline(admin.StackedInline):
            model = Invoice.products.through

        class InvoiceAdmin(admin.ModelAdmin):
            inlines = [FilteredApartmentInline,]

        admin.site.register(Product, ProductAdmin)

    The problem is that django presents the products as a table of drop down menus (one per associated product). Each drop down contains all the products listed. So if I have 5000 products and 300 are associated with a certain invoice, django actually loads 300x5000 product names. Also the table is not aesthetic. How can I change it so that it'll just display the product's name in the inline table? Which form should I override, and how?
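
    One way to stop the admin from rendering a full 5000-item dropdown per row is to switch the inline's foreign key to a raw-id widget, which shows just the id plus a lookup popup instead of loading every product name. A minimal sketch, assuming the auto-created through model exposes the foreign key as product (the inlines list here also points at ProductInline, where the question's snippet references FilteredApartmentInline, presumably a leftover from other code):

        class ProductInline(admin.TabularInline):
            model = Invoice.products.through
            raw_id_fields = ('product',)  # id + search popup instead of a giant <select>
            extra = 1

        class InvoiceAdmin(admin.ModelAdmin):
            inlines = [ProductInline]

        admin.site.register(Invoice, InvoiceAdmin)

    Displaying the product's name as plain read-only text in the inline would instead mean overriding the inline's form (or, on newer Django versions, using readonly_fields).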

    Read the article

  • Default value for field in Django model

    - by Daniel Garcia
    Suppose I have a model:

        class SomeModel(models.Model):
            id = models.AutoField(primary_key=True)
            a = models.IntegerField(max_length=10)
            b = models.CharField(max_length=7)

    Currently I am using the default admin to create/edit objects of this type. How do I set the field 'a' to have the same number as id? (default=???)

    Another question: suppose I have a model:

        event_date = models.DateTimeField(null=True)
        year = models.IntegerField(null=True)
        month = models.CharField(max_length=50, null=True)
        day = models.IntegerField(null=True)

    How can I set the year, month and day fields by default to be the same as the event_date field?
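
    A default= callable can't see the row's eventual id, because the id is only assigned when the row is inserted, so the usual workaround for both questions is to fill the fields in save(). A sketch along those lines (illustrative only; the second model is shown as a hypothetical Event class since the question omits its name, and field 'a' is made nullable here so the two-step save works):

        class SomeModel(models.Model):
            id = models.AutoField(primary_key=True)
            a = models.IntegerField(max_length=10, null=True, blank=True)
            b = models.CharField(max_length=7)

            def save(self, *args, **kwargs):
                super(SomeModel, self).save(*args, **kwargs)      # insert first, so self.id is populated
                if self.a is None:
                    self.a = self.id
                    super(SomeModel, self).save(*args, **kwargs)  # second save writes the copied value

        class Event(models.Model):
            event_date = models.DateTimeField(null=True)
            year = models.IntegerField(null=True)
            month = models.CharField(max_length=50, null=True)
            day = models.IntegerField(null=True)

            def save(self, *args, **kwargs):
                if self.event_date and self.year is None:
                    self.year = self.event_date.year
                    self.month = self.event_date.strftime('%B')   # month name, since the field is a CharField
                    self.day = self.event_date.day
                super(Event, self).save(*args, **kwargs)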

    Read the article

  • Upload Image with Django Model Form

    - by jmitchel3
    I'm having difficulty uploading the following model with model form. I can upload fine in the admin but that's not all that useful for a project that limits admin access.

        # models.py
        class Profile(models.Model):
            name = models.CharField(max_length=128)
            user = models.ForeignKey(User)
            profile_pic = models.ImageField(upload_to='img/profile/%Y/%m/')

        # views.py
        def create_profile(request):
            try:
                profile = Profile.objects.get(user=request.user)
            except:
                pass
            form = CreateProfileForm(request.POST or None, instance=profile)
            if form.is_valid():
                new = form.save(commit=False)
                new.user = request.user
                new.save()
            return render_to_response('profile.html', locals(), context_instance=RequestContext(request))

        # profile.html
        <form enctype="multipart/form-data" method="post">{% csrf_token %}
            <tr><td>{{ form.as_p }}</td></tr>
            <tr><td><button type="submit" class="btn">Submit</button></td></tr>
        </form>

    Note: All the other data in the form saves perfectly well, the photo does not upload at all. Thank you for your help!
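
    The usual culprit when everything except the file saves is that the view never passes request.FILES to the form, so the ImageField arrives empty. A sketch of the view with that one change (plus a fallback when no Profile exists yet, since the bare except currently leaves profile undefined):

        def create_profile(request):
            try:
                profile = Profile.objects.get(user=request.user)
            except Profile.DoesNotExist:
                profile = None  # instance=None lets the form create a new Profile

            form = CreateProfileForm(request.POST or None,
                                     request.FILES or None,  # without this, the upload never reaches the form
                                     instance=profile)
            if form.is_valid():
                new = form.save(commit=False)
                new.user = request.user
                new.save()
            return render_to_response('profile.html', locals(),
                                      context_instance=RequestContext(request))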

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory related parameters for a process. For example working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around, we have an application which cannot allocate as much virtual memory running on a terminal server as it can when running on a desktop PC (both I would expect to have a limit of 2GB for user mode address space) and I was wondering if there is another limit for processes or users on a terminal server. Perhaps even 2GB per user rather than per process.

    Read the article

  • How to prevent a javascript/backbone.js cloned model from sharing attributes

    - by user540727
    I'm working with backbone.js models, so I don't know if my question is particular to the way backbone handles cloning or if it applies to javascript in general. Basically, I need to clone a model which has an attribute property assigned an object. The problem is that when I update the parent or clone's attribute, the other model is also updated. Here is a quick example:

        var A = Backbone.Model.extend({});
        var a = new A({'test': {'some': 'crap'}});
        var b = a.clone();
        a.get('test')['some'] = 'thing'; // I could also use a.set() to set the attribute with the same result
        console.log(JSON.stringify(a));
        console.log(JSON.stringify(b));

    which logs the following:

        {"test":{"some":"thing"}}
        {"test":{"some":"thing"}}

    I would prefer to clone a such that b won't be referencing any of its attributes. Any help would be appreciated.

    Read the article

  • Django queries: Count number of objects with FK to model instance

    - by Chris Lawlor
    This should be easy but for some reason I'm having trouble finding it. I have the following:

        class App(models.Model):
            ...

        class Release(models.Model):
            date = models.DateTimeField()
            App = models.ForeignKey(App)
            ...

    How can I query for all App objects that have at least one Release? I started typing:

        App.objects.all().annotate(release_count=Count('??????')).filter(release_count__gt=0)

    which won't work because Count doesn't span relationships, at least as far as I can tell. BONUS: Ultimately, I'd also like to be able to sort Apps by latest release date. I'm thinking of caching the latest release date in the app to make this a little easier (and cheaper), and updating it in the Release model's save method, unless of course there is a better way. Edit: I'm using Django 1.1 - not averse to migrating to dev in anticipation of 1.2 if there is a compelling reason though.
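
    Count can in fact span the reverse relationship; with no related_name set, the lookup name is the lowercase model name, release. A sketch of both the filter and the bonus sort, written against Django 1.1-era aggregation (illustrative only):

        from django.db.models import Count, Max

        # Apps that have at least one Release
        apps_with_release = (App.objects
                             .annotate(release_count=Count('release'))
                             .filter(release_count__gt=0))

        # Bonus: order Apps by their most recent release date
        apps_by_latest = (App.objects
                          .annotate(latest_release=Max('release__date'))
                          .order_by('-latest_release'))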

    Read the article

  • Django unable to update model

    - by user292652
    I have the following function to override the default save function in a Match model:

        def save(self, *args, **kwargs):
            if self.Match_Status == "F":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1)
                if self.Winner != "":
                    Team.objects.filter(pk=self.Winner.id).update(Win=F('Win')+1, Points=F('Points')+3)
                else:
                    return
            if self.Match_Status == "D":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played')+1, Draw=F('Draw')+1, Points=F('Points')+1)
            super(Match, self).save(*args, **kwargs)

    I am able to save the Match model just fine, but the Team model does not seem to be updating at all, and no error is being thrown. Am I missing something here?
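
    Nothing in the posted override obviously prevents the update() calls from running, but two details stand out: the bare return in the Winner branch skips super().save() entirely, and if Winner is a ForeignKey then comparing it to "" is not a meaningful emptiness test. A hedged sketch of a restructured version (field names as in the question; the None check assumes Winner is a nullable FK, which the question does not confirm):

        def save(self, *args, **kwargs):
            if self.Match_Status == "F":
                Team.objects.filter(pk=self.Team_one.id).update(Played=F('Played') + 1)
                Team.objects.filter(pk=self.Team_two.id).update(Played=F('Played') + 1)
                if self.Winner is not None:  # FK emptiness test, rather than != ""
                    Team.objects.filter(pk=self.Winner.id).update(Win=F('Win') + 1,
                                                                  Points=F('Points') + 3)
            elif self.Match_Status == "D":
                for team in (self.Team_one, self.Team_two):
                    Team.objects.filter(pk=team.id).update(Played=F('Played') + 1,
                                                           Draw=F('Draw') + 1,
                                                           Points=F('Points') + 1)
            super(Match, self).save(*args, **kwargs)  # always persist the Match itself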

    Read the article

  • Why does my 64-bit IIS app pool show 3 gigabytes more virtual memory than private memory?

    - by Brett
    I have an ASP.Net application that I am running on 64-bit IIS 6 on Windows XP x64. When I open performance counters after one page hit of a trivial page, I see a Private Bytes of about 88 megs, but a Virtual Bytes of about 3 Gigs. When I try the same thing with a VERY trivial ASP.Net app, I get the same result. We see something similar on Windows Server 2003 in production -- there it is an issue because we recycle when the virtual memory consumed outgrows a limit. Before we make any changes to our recycling settings, we'd like to answer the following questions: Why does the app pool grab such a large hunk of virtual memory? Is the amount of virtual memory headroom the app requests configurable? Thanks! Brett

    Read the article

  • Calling a method from within a django model save() override

    - by Jonathan
    I'm overriding a django model save() method. Within the override I'm calling another method of the same class and instance, which calculates one of the instance's fields based on other fields of the same instance.

        class MyClass(models.Model):
            field1 = models.FloatField()
            field2 = models.FloatField()
            field3 = models.FloatField()

            def calculateField1(self):
                self.field1 = self.field2 + self.field3

            def save(self, *args, **kwargs):
                self.calculateField1()
                super(MyClass, self).save(*args, **kwargs)

    The overridden save() is called when I change the model in admin. Alas, I've discovered that within calculateField1(), field2 and field3 have the values of the instance from before I edited them in admin. If I enter the instance again in admin and save again, only then does field1 receive the correct value, as field2 and field3 are already updated. Is this the correct behavior on django's side? If yes, then how can I use the new values within calculateField1()? I cannot implement the calculation within save() itself, as calculateField1() is actually quite long and I need it to be called from elsewhere.

    Read the article

  • Django comparing model instances for equality

    - by orokusaki
    I understand that, with a singleton situation, you can perform an operation such as spam == eggs, and if spam and eggs are instances of the same class with all the same attribute values, it will return True. In a Django model, this is natural because two separate instances of a model won't ever be the same unless they have the same .pk value. The problem with this is that if a reference to an instance has attributes that have been updated by middleware somewhere along the way and it hasn't been saved, and you're trying to compare it to another variable holding a reference to an instance of the same model, it will of course return False because they have different values for some of the attributes. Obviously I don't need something like a singleton, but I'm wondering if there is some official Djangonic (ha, a new word) method for checking this, or if I should simply check that the .pk value is the same with spam.pk == eggs.pk. I'm sorry if this was a huge waste of time, but it just seems like there might be a method for doing this, and something I'm missing that I'll regret down the road if I don't find it.
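
    Django's own Model.__eq__ already treats two saved instances of the same model with the same pk as equal, so spam.pk == eggs.pk (plus a type check) is the idiomatic test for "same database row". If the question is instead "same current field values, saved or not", a small helper is one option; a sketch using model_to_dict (the exclude list is an assumption):

        from django.forms.models import model_to_dict

        def same_row(spam, eggs):
            # "Same database object", ignoring any unsaved in-memory changes
            return type(spam) == type(eggs) and spam.pk == eggs.pk

        def same_values(spam, eggs, exclude=('id',)):
            # "Same current field values", ignoring the primary key
            return model_to_dict(spam, exclude=exclude) == model_to_dict(eggs, exclude=exclude)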

    Read the article

  • ACT Professional for Windows-Memory leak?

    - by Dan
    I have ACT! Professional for Windows v11.1, with the latest SQL service pack (SP3), and an apparent memory leak on the server. After a restart the ACT! SQL instance (SQLSERVR) consumes almost all the available memory on the server; we have added more memory to the server (it is running under Hyper-V) but it continues to consume it all. I have not been able to connect to the SQL Server instance using Management Studio in order to limit the amount of RAM it is allocated. Are there any potential solutions for this, or should I continue to restart the services?

    Read the article

  • Django model manager didn't work with related object when I do aggregated query

    - by Satoru.Logic
    Hi, all. I'm having trouble doing an aggregation query on a many-to-many related field. Let's begin with my models:

        class SortedTagManager(models.Manager):
            use_for_related_fields = True

            def get_query_set(self):
                orig_query_set = super(SortedTagManager, self).get_query_set()
                # FIXME `used` is wrongly counted
                return orig_query_set.distinct().annotate(
                    used=models.Count('users')).order_by('-used')

        class Tag(models.Model):
            content = models.CharField(max_length=32, unique=True)
            creator = models.ForeignKey(User, related_name='tags_i_created')
            users = models.ManyToManyField(User, through='TaggedNote', related_name='tags_i_used')
            objects_sorted_by_used = SortedTagManager()

        class TaggedNote(models.Model):
            """Association table of both (Tag, Note) and (Tag, User)"""
            note = models.ForeignKey(Note)  # Note is what's tagged in my app
            tag = models.ForeignKey(Tag)
            tagged_by = models.ForeignKey(User)

            class Meta:
                unique_together = (('note', 'tag'),)

    However, the value of the aggregated field used is only correct when the model is queried directly:

        for t in Tag.objects.all():
            print t.used            # this works correctly

        for t in user.tags_i_used.all():
            print t.used            # prints n^2 when it should give n

    Would you please tell me what's wrong with it? Thanks in advance.
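
    The inflated numbers are the classic symptom of aggregating over a queryset that is already joined through the m2m table: each extra join row multiplies the count, and the outer distinct() does not de-duplicate inside the aggregate. Making the aggregate itself distinct is the usual fix; a sketch of the manager with that one change (assuming Count(..., distinct=True) is available in the Django version in use):

        class SortedTagManager(models.Manager):
            use_for_related_fields = True

            def get_query_set(self):
                orig_query_set = super(SortedTagManager, self).get_query_set()
                return orig_query_set.annotate(
                    used=models.Count('users', distinct=True)  # count distinct users, not join rows
                ).order_by('-used')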

    Read the article
