Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from. The base class uses pure virtual functions, and it's only ever going to be used through a vector that'll have the inherited level classes pushed onto it. This is what my code looks like at the moment; I've tried various things and get the same result (segmentation fault).

        //level.h
        class Level
        {
        protected:
            Mix_Music *music;
            SDL_Surface *background;
            SDL_Surface *background2;
            vector<Enemy> enemy;
            bool loaded;
            int time;

        public:
            Level();
            virtual ~Level();

            int bgX, bgY;
            int bg2X, bg2Y;
            int width, height;

            virtual void load();
            virtual void unload();
            virtual void update();
            virtual void draw();
        };

        //level.cpp
        Level::Level()
        {
            bgX = 0; bgY = 0;
            bg2X = 0; bg2Y = 0;
            width = 2048;
            height = 480;
            loaded = false;
            time = 0;
        }

        Level::~Level() { }

        //virtual functions are empty...

    I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment:

        //level1.h
        class Level1 : public Level
        {
        public:
            Level1();
            ~Level1();
            void load();
            void unload();
            void update();
            void draw();
        };

        //level1.cpp
        Level1::Level1() { }

        Level1::~Level1()
        {
            enemy.clear();
            Mix_FreeMusic(music);
            SDL_FreeSurface(background);
            SDL_FreeSurface(background2);
            music = NULL;
            background = NULL;
            background2 = NULL;
            Mix_CloseAudio();
        }

        void Level1::load()
        {
            music = Mix_LoadMUS("music/song1.xm");
            background = loadImage("image/background.png");
            background2 = loadImage("image/background2.png");
            Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096);
            Mix_PlayMusic(music, -1);
        }

        void Level1::unload() { }

        //functions have level-specific code in them...

    Right now, for testing purposes, I just have the main loop declare a Level1 level1; and use the functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.
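
    Two likely culprits here, independent of the inheritance mechanics: Level's constructor never initializes music, background, or background2, so if a Level1 is destroyed before load() runs (or after Mix_LoadMUS or the loadImage helper fails and returns NULL), the destructor frees garbage pointers; it is also worth checking that Mix_OpenAudio is called before Mix_LoadMUS, since SDL_mixer wants the audio device opened first. And the planned vector has to hold pointers: Level is abstract, and holding derived objects by value would slice them anyway. A minimal sketch of both fixes, with the SDL types stubbed out (Resource is a hypothetical stand-in for Mix_Music/SDL_Surface):

        #include <cstdio>
        #include <memory>
        #include <vector>

        struct Resource {};  // stand-in for Mix_Music / SDL_Surface

        class Level {
        protected:
            Resource *music;
            Resource *background;
        public:
            // Null-initialize resources so a destructor that runs before
            // load(), or after a failed load(), frees nothing.
            Level() : music(nullptr), background(nullptr) {}
            virtual ~Level() {}
            virtual void load() = 0;
            virtual void update() = 0;
        };

        class Level1 : public Level {
        public:
            ~Level1() {
                delete music;       // deleting a null pointer is a no-op
                delete background;
            }
            void load() { music = new Resource; background = new Resource; }
            void update() { std::puts("level1 tick"); }
        };

        int main() {
            // Hold levels through base-class pointers so virtual calls
            // dispatch to Level1; a vector<Level> could not compile here
            // (Level is abstract) and would slice if it could.
            std::vector<std::unique_ptr<Level>> levels;
            levels.emplace_back(new Level1);
            levels[0]->load();
            levels[0]->update();
        }

    In the real code the same guard means checking the return values of Mix_LoadMUS and loadImage before using them, since SDL signals failure with NULL rather than an exception.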


  • Question on methods in Object Oriented Programming

    - by mal
    I'm learning Java at the minute (first language), and as a project I'm looking at developing a simple puzzle game. My question relates to the methods within a class. I have my Block class; it has its many attributes, set methods, get methods and just plain methods. There are quite a few. Then I have my main board class. At the moment it does most of the logic: positioning of sprites, collision detection, and then drawing the sprites, etc. As I am learning to program as much as I'm learning to program games, I'm curious to know how much code is typically acceptable within a given method. Is there such a thing as having too many methods? All my draw functionality happens in one method; should I break it into a few 'sub' methods? My thinking is that if I find at a later stage that the for loop I'm using to cycle through the array of sprites searching for collisions in the spriteCollision() method is inefficient, I can code a new method and just replace the old method calls with the new one, leaving the old code intact. Is it bad practice to have a method that contains one if statement, and place the call for that method in the for loop? I'm very much in the early stages of coding/designing and I need all the help I can get! I find it a little intimidating when people are talking about throwing together a prototype in a day too! Can't wait until I'm that good!


  • Getting an object from a 2d array inside of a class

    - by user36324
    I have a class file that contains two classes, Platform and Platforms. Platform holds a single platform's information, and Platforms has a 2D array of Platform objects. I'm trying to render all of them in a for loop, but it is not working. If you could kindly help me I would greatly appreciate it.

        void Platforms::setUp()
        {
            for (int x = 0; x < tilesW; x++) {
                for (int y = 0; y < tilesH; y++) {
                    Platform tempPlat(x, y, true, renderer, filename,
                                      tileSize / scaleW, tileSize / scaleH);
                    platArray[x][y] = tempPlat;
                }
            }
        }

        void Platforms::show()
        {
            for (int x = 0; x < tilesW; x++) {
                for (int y = 0; y < tilesH; y++) {
                    platArray[x][y].show(renderer, scaleW, scaleH);
                }
            }
        }
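
    The snippet doesn't show how platArray is declared, but the usual cause of this symptom is a mismatch between the array's declared dimensions and the tilesW/tilesH loop bounds (an out-of-bounds write corrupts memory silently), or a Platform that can't safely be default-constructed and then assigned over. A hedged sketch of the usual shape, with the grid sized from the same values the loops use and Platform reduced to hypothetical fields:

        #include <vector>

        struct Platform {
            int x, y;
            bool solid;
            explicit Platform(int px = 0, int py = 0, bool s = false)
                : x(px), y(py), solid(s) {}
            void show() const { /* draw the tile at (x, y) */ }
        };

        class Platforms {
            int tilesW, tilesH;
            std::vector<std::vector<Platform>> grid;
        public:
            // Sizing the grid from tilesW/tilesH keeps the storage and the
            // loops in agreement, so grid[x][y] can never go out of bounds.
            Platforms(int w, int h)
                : tilesW(w), tilesH(h), grid(w, std::vector<Platform>(h)) {}

            void setUp() {
                for (int x = 0; x < tilesW; ++x)
                    for (int y = 0; y < tilesH; ++y)
                        grid[x][y] = Platform(x, y, true);  // assign over default
            }

            void show() const {
                for (const auto &column : grid)
                    for (const auto &p : column)
                        p.show();
            }
        };

        int main() {
            Platforms platforms(20, 15);
            platforms.setUp();
            platforms.show();
        }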


  • A more reliable and more flexible sp_MSforeachdb

    - by AaronBertrand
    I've complained about sp_MSforeachdb before. In part of my "Bad Habits to Kick" series in 2009-10, I described how I worked around its sporadic inability to actually process all of the databases on an instance: http://sqlblog.com/blogs/aaron_bertrand/archive/2010/02/08/bad-habits-to-kick-relying-on-undocumented-behavior.aspx I lumped this in a "Bad Habit" category of relying on undocumented behavior, since - while the procedure does have rampant usage - it is, in fact, both undocumented and unsupported. ... (read more)


  • XNA - How do I change the texture of a 2D object?

    - by Adorjan
    I'm making a board game. I successfully figured out how to make the array and how to move the cursor on it (by tiles). Now I want to find out how to make it so that when I hit the Enter key, the tile's texture changes to another one. I tried this:

        if (input.KeyPressed(Keys.Enter))
        {
            cell[X, Y].Cell_texture = tile_texture;
        }

    but it doesn't really work. Hope you can help. :) Thanks!


  • Game Object Design

    - by oisin
    I'm having a problem with the way I designed my first simple game in C++. I have GameObject (an abstract class) and ObjectA, which inherits the update() and draw() methods from GameObject. My main loop holds a linked list of GameObject*, and while that list is not empty it cycles through it, calling update() on each one. Up until this point, I thought the design was standard(?) and would work. However, when I call update() on an ObjectA I run into two problems:

    1. An ObjectA can die, which messes up the list and in turn throws off the loop in main.
    2. An ObjectA can spawn more ObjectA's, but these are local in scope and are destroyed when update() returns, creating problems in main's list of GameObjects.

    I think my design is alright, but I'm having such problems with segmentation faults that there must be something seriously wrong with at least one part of my implementation. If anyone could point out any serious mistakes or simple examples of this being done (or even alternative designs), I would greatly appreciate it!
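
    Both problems have the same standard cure: never add to or erase from the list while iterating it. Give each object a dead flag and a place to park spawns, then apply deaths and births after the update pass, so no iterator is invalidated mid-loop and spawned objects are moved into owned storage before they can go out of scope. A sketch of that shape (all names here are hypothetical):

        #include <list>
        #include <memory>
        #include <vector>

        struct GameObject {
            bool dead = false;
            virtual ~GameObject() {}
            // update() appends new objects to 'spawned' instead of touching
            // the main list directly.
            virtual void update(std::vector<std::unique_ptr<GameObject>> &spawned) = 0;
            virtual void draw() = 0;
        };

        struct Spark : GameObject {
            int ttl = 2;
            void update(std::vector<std::unique_ptr<GameObject>> &spawned) override {
                if (--ttl <= 0) dead = true;           // dying is just a flag
                else spawned.emplace_back(new Spark);  // spawning is deferred
            }
            void draw() override {}
        };

        void runFrame(std::list<std::unique_ptr<GameObject>> &objects) {
            std::vector<std::unique_ptr<GameObject>> spawned;

            for (auto &obj : objects)
                obj->update(spawned);       // no insert/erase during this loop

            // Bury the dead only after the pass, so iterators stay valid.
            objects.remove_if([](const std::unique_ptr<GameObject> &o) {
                return o->dead;
            });

            // Move the newborns into the owning list; they outlive update().
            for (auto &s : spawned)
                objects.push_back(std::move(s));

            for (auto &obj : objects)
                obj->draw();
        }

        int main() {
            std::list<std::unique_ptr<GameObject>> objects;
            objects.emplace_back(new Spark);
            for (int frame = 0; frame < 5; ++frame)
                runFrame(objects);
        }

    Using unique_ptr also answers the ownership question directly: the list owns every live object, so nothing depends on locals surviving past update().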


  • From a language design perspective, if Javascript objects are simply associative arrays, then why have objects?

    - by Christopher Altman
    I was reading about objects in O'Reilly's JavaScript Pocket Reference, and the book made the following statement: "An object is a compound data type that contains any number of properties. JavaScript objects are associative arrays: they associate arbitrary data values with arbitrary names." From a language design perspective, if objects are simply associative arrays, then why have objects? I appreciate the convenience of having objects in the language, but if convenience is the main purpose for adding a data type, then how do you decide what to add and what not to add to a language? A language can quickly become bloated and less valuable if it is weighed down by several overlapping methods and data types. (Is this a true statement, or am I missing something?)


  • grouping draggable objects with jquery-ui draggable

    - by Jim Robert
    Hello, I want to use jQuery draggable/droppable to let the user select a group of objects (each one has a checkbox in the corner) and then drag all the selected objects as a group. I can't figure out for the life of me how to do it, haha. Here is what I'm thinking will lead to a usable solution: on each draggable object, use the start() event and somehow grab all the other selected objects and add them to the selection. I was also considering just making the dragged object look like a group of objects (they're images, so a pile of photos maybe) for performance reasons. I think the draggable functionality might fall over if you drag several dozen objects at once. Does that sound like a better idea?


  • Iterating Oracle collections of objects without exploding them

    - by Scott Bailey
    I'm using Oracle object data types to represent a timespan or period, and I've got to do a bunch of operations that involve working with collections of periods. Iterating over collections in SQL is significantly faster than in PL/SQL.

        CREATE TYPE PERIOD AS OBJECT (
            beginning DATE,
            ending    DATE,
            ... some member functions ...);

        CREATE TYPE PERIOD_TABLE AS TABLE OF PERIOD;

        -- sample usage
        SELECT <<period object>>.contains(period2)
        FROM TABLE(period_table1) t

    The problem is that the TABLE() function explodes the objects into scalar values, and I really need the objects instead. I could use the scalar values to recreate the objects, but this would incur the overhead of re-instantiating them; and since the period is designed to be subclassed, there would be additional difficulty in figuring out what to initialize each one as. Is there another way to do this that doesn't destroy my objects?


  • Serializing ActiveRecord objects without storing their attributes?

    - by Allan Grant
    I'm working on a problem where I need to store serialized hierarchies of Ruby objects in the database. Many of the objects that will need to be saved are ActiveRecord objects with a lot of attributes. Instead of saving the entire objects and then refreshing their attributes from the DB when I load them (in case they changed, which is likely), it would be easier to just store the references (class and database id) for these objects. Does anyone know if there's already a way to do this in Rails, or if there's an existing gem for it? I wanted to check whether something existed before spending a ton of time hacking on it.


  • Get a queryset of objects through an intermediary model

    - by skyl
    I want to get all of the Geom objects that are related to a certain content_object (see the function I'm trying to build at the bottom, get_geoms_for_object()).

        class Geom(models.Model):
            ...

        class GeomRelation(models.Model):
            ''' For tagging many objects to a Geom object and vice-versa '''
            geom = models.ForeignKey(Geom)
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey()

        def get_geoms_for_object(obj):
            ''' Takes an object and gets the Geoms that are related. '''
            ct = ContentType.objects.get_for_model(obj)
            id = obj.id
            grs = GeomRelation.objects.filter(
                content_type=ct,
                object_id=id
            )
            # How, with Django ORM magic, can I build a queryset instead of
            # the list below, to get all of the Geom objects for a given
            # content_object?
            geoms = []
            for gr in grs:
                geoms.append(gr.geom)
            return set(geoms)
            # A set makes it so that I have no redundant entries, but I want
            # the queryset ordering too .. need to make it a queryset for so
            # many reasons...


  • grouping objects to achieve a similar mean property for all groups

    - by cytochrome
    I have a collection of objects, each of which has a numerical 'weight'. I would like to create groups of these objects such that each group has approximately the same arithmetic mean of object weights. The groups won't necessarily have the same number of members, but the group sizes will be within one of each other. In terms of numbers, there will be between 50 and 100 objects and the maximum group size will be about 5. Is this a well-known type of problem? It seems a bit like a knapsack or partition problem. Are efficient algorithms known to solve it? As a first step, I created a Python script that achieves very crude equivalence of mean weights by sorting the objects by weight, subgrouping these objects, and then distributing a member of each subgroup to one of the final groups. I am comfortable programming in Python, so if existing packages or modules exist to achieve part of this functionality, I'd appreciate hearing about them. Thank you for your help and suggestions.
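
    This is a form of multiway number partitioning, a relative of the partition and knapsack problems, so exact optimization is NP-hard in general; at 50-100 objects and groups of about 5, though, a greedy heuristic is usually plenty. The sort-subgroup-distribute approach described above is close to a "snake draft": sort by weight, then deal items to the groups in forward-then-reverse order so heavy and light items offset, with group sizes landing within one of each other (near-equal sums over near-equal sizes give near-equal means). A minimal sketch of that heuristic, with illustrative names (in C++ for concreteness; it ports line-for-line to Python):

        #include <algorithm>
        #include <cstdio>
        #include <functional>
        #include <vector>

        // Deal weights into numGroups groups in serpentine order:
        // 0,1,...,n-1,n-1,...,1,0,0,1,... after sorting heaviest-first.
        std::vector<std::vector<double>> snakeGroups(std::vector<double> weights,
                                                     std::size_t numGroups) {
            std::sort(weights.begin(), weights.end(), std::greater<double>());
            std::vector<std::vector<double>> groups(numGroups);
            std::size_t g = 0;
            bool forward = true;
            for (double w : weights) {
                groups[g].push_back(w);
                if (forward) {
                    if (g + 1 == numGroups) forward = false; else ++g;
                } else {
                    if (g == 0) forward = true; else --g;
                }
            }
            return groups;
        }

        int main() {
            std::vector<double> weights = {9, 8, 7, 7, 5, 4, 3, 2, 2, 1};
            for (const auto &group : snakeGroups(weights, 3)) {
                double sum = 0;
                for (double w : group) sum += w;
                std::printf("size=%zu mean=%.2f\n", group.size(), sum / group.size());
            }
        }

    A refinement pass that swaps items between the highest-mean and lowest-mean groups while the spread shrinks will tighten the result further.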


  • Generically creating objects in C#

    - by DrLazer
    What I am trying to do is load objects in from an XML save file. The problem is that those objects are configurable by the user at runtime, meaning I had to use reflection to get the names and attributes of the objects stored in the XML file. I am in the middle of a recursive loop through the XML and up to the part where I need to create an object, then thought... ah, no idea how to do that :( I have an array stuffed with empty objects (m_MenuDataTypes), one of each possible type. My recursive loading function looks like this:

        private void LoadMenuData(XmlNode menuDataNode)
        {
            foreach (object menuDataObject in m_MenuDataTypes)
            {
                Type menuDataObjectType = menuDataObject.GetType();
                if (menuDataObjectType.Name == menuDataNode.Name)
                {
                    //create object
                }
            }
        }

    I need to put some code where my comment is, but I can't have a big switch statement or anything. The objects in my array can change depending on how the user has configured the app.
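
    Since the loop already holds the Type, reflection can finish the job too: System.Activator.CreateInstance(menuDataObjectType) constructs a fresh instance, assuming each type has a parameterless constructor. The reflection-free alternative for "create an object from a runtime name" is a registry mapping names to factory functions; a hypothetical sketch of that pattern, in C++ here since it has no reflection to lean on (SliderData/CheckboxData are invented stand-ins for the user-configurable types):

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        struct MenuData {
            virtual ~MenuData() {}
        };

        struct SliderData : MenuData {};
        struct CheckboxData : MenuData {};

        using Factory = std::function<std::unique_ptr<MenuData>()>;

        // Each type registers a factory under its name; configuration can
        // decide at startup which entries exist.
        std::map<std::string, Factory> registry = {
            {"SliderData",   []() -> std::unique_ptr<MenuData> {
                return std::unique_ptr<MenuData>(new SliderData); }},
            {"CheckboxData", []() -> std::unique_ptr<MenuData> {
                return std::unique_ptr<MenuData>(new CheckboxData); }},
        };

        // The loop over the array of empty objects collapses to one lookup.
        std::unique_ptr<MenuData> createByName(const std::string &nodeName) {
            auto it = registry.find(nodeName);
            if (it == registry.end()) return nullptr;
            return it->second();
        }

        int main() {
            auto obj = createByName("SliderData");  // non-null on success
            return obj ? 0 : 1;
        }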


  • Find number of serialized objects

    - by tmosier
    Hello all. My issue is trying to determine the number of objects created, where the objects are serialized from an XML document. The XML document should be set up for simplicity, so any developer can add an additional object and need no further modification to the code. However, each of these objects needs to be handled/updated separately; specifically, some of the objects are of different sub-classes, which need to be handled differently. Here is some pseudocode for what I am going for:

        class A
        {
            public B myClassB;

            public void Update()
            {
                // for every object of whatever type in myClassB
                // update logic
            }
        }

    XML:

        <data>
            <object1 ... />
            etc...

    So what would be my simplest course of action, allowing others to add objects via the XML, but still ensuring the proper logic happens for each?


  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays).

    Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application).

    What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks


  • osx bash grep - finding search terms in a large file with one single line

    - by unsynchronized
    Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file?

    OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded that I want to search for a particular term. Before you click the link - it's over 244MB, so be warned - it is from the Internet Wayback Machine and contains lists of zip files of archived photos. I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public here - it's the last one on the list. When I grep looking for my username, it finds it, but proceeds to dump that line to the console. The problem is that the line is 244MB long, and it's the only line in the file. I tried using less, but could not get that to do much - it's very slow, and seems to have the same issue. Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term?


  • IIS7: large worker process request queue creating process blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7. I am seeing a large queue of "requests" appear under the "worker process" option. The states recorded appear to be Authenticate Request and Execute Request Handler more than anything else. I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit path and 64-bit path) to include:

        maxConcurrentRequestsPerCPU="50000"
        maxConcurrentThreadsPerCPU="0"
        requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit path) to include:

        autoConfig="true"
        maxIoThreads="100"
        maxWorkerThreads="100"
        minIoThreads="50"
        minWorkerThreads="50"
        minFreeThreads="176"
        minLocalRequestFreeThreads="152"

    Still I get the issue. It manifests itself as a large number of processes in the worker process queue. The number of current connections to the website shows 500 when this issue occurs; I don't think I have seen concurrent connections over 500 without the issue occurring. The web application slows as the requests block. Recycling the app pool resolves it for a while (as expected), as the load is spread between the two pools. The application pool in question has fixed-request recycling set to 50000. Thank you for any help. Scott

    Quick edit to say: hmm, my developers are telling me the project was built with the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5, there does not appear to be an ASPNET.CONFIG or a MACHINE.CONFIG... is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files for those it is missing. So back to the original question: where is my bottleneck?


  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites.

    My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support this sort of pressure. You can do one of two things:

    1. Invest in a single large server that has huge amounts of processing power, memory and storage (such as Microsoft's TerraServer).
    2. Spread the load of your website across a number of machines.

    Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can either spread the load across these machines using a cyclic pattern in DNS, or you can use a load balancing switch. The advantages of this approach are:

    - Redundancy: servers can fail and the others will "pick up the slack".
    - Incremental growth: the ability to easily add new machines to the set-up.

    My questions:

    1. Is there a virtual approach to this issue of load balancing now?
    2. If the website runs from a database, is there still only a single copy of the database?
    3. If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where they created a session), would they still have their session if they refreshed the website and were allocated Server 3?
    4. What are the other disadvantages associated with load balancing?

    Many thanks, J


  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:

        ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I'm assuming the binary data will get lost... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB large, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I'm not sure if that could add to the problems (I understand it shouldn't).

    Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command line tools like grep, sed, etc.).

    The file might be corrupted, but I'm not exactly sure how to check it. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that.

    Any ideas what could be going on / how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel


  • How do I import a large SQL file to a local LAMP (XAMPP) environment

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded the large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access the local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt:

        cd ..
        cd ..
        cd xampp
        cd mysql
        cd bin

    I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation as to whether or not to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command? I've modified the hosts file, the virtual-hosts conf, and the Apache config files. Do I need to change any other config files on my local system? Thanks.


  • How to configure a large MTU (Linux)

    - by Somejan
    I have a gigabit ethernet connection from my laptop to my router, and a working IPv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes (according to Wireshark). (Edit: that turns out to be Linux's 'generic receive offload'.) However, when trying to send anything, my local computer fragments at just below 1500 bytes for IPv6. (On IPv4, I can send TCP packets to the internet of at least 1514 bytes, and I can ping with packets up to the configured MTU of 6128, but they are blackholed.) I'm on Ubuntu 12.04.

    I have configured an MTU for my eth0 of 6128 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet GUI, and restarted the connection. ip link show eth0 shows the 6128 MTU is indeed set. ip -6 route shows that none of the paths the kernel knows about have an MTU set. I can ping over IPv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the MTU is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger MTU?


  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is:

        Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set right, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted should create problems. Anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

    Update: What I've learnt so far is that folders with more than approx. 3000 messages will generate errors. If deleted-message retention is set (default 30 days), Export-Mailbox will scan all messages, whether they were deleted in previous runs or not, so restricting the dates to limit the number of messages will not work. To avoid the errors, I've switched off deleted-message retention for the mailbox and moved the messages from the one large folder into multiple folders, then moved those one by one...


  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Windows 2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM.

    The problem is that the files keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists.


  • Copy-paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes using this batch:

        @ECHO off
        SET times=15000
        FOR /L %%i IN (1,1,%times%) DO (
            fsutil file createnew filename%%i.txt 400
        )

    Then I copy-paste it on my Windows computer using this command:

        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\

    After it has completed, I can see that the transfer rate was 915810 bytes/sec, which is less than 1 MB/s. It took me several seconds to copy 7 MB. Please note that this is very slow. I've tried the same with a folder containing a single file of 50 MB, and the transfer rate was 1219512195 bytes/sec (yeah, GB/s) - instantaneous.

    Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Please note that I've tried to do the same on a Linux system which runs on the same computer in a virtual machine (VMware Player) with an ext3 filesystem. I use the cp command and the copy is instantaneous!

    Please also note the following:

    - No antivirus is running.
    - I've tested that behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, avg 7-8 seconds to copy 7 MB).
    - I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes).

    The question is about understanding what makes the Windows filesystem so slow at copying large numbers of files compared to, for instance, a Linux one.


  • Setting objects (not users) inactive after a period of time in ASP.NET MVC

    - by bastijn
    This question is mainly to verify my current idea. I have a series of objects which I want to be active only for a specified amount of time. For instance: ads which are shown in the ad space only for the amount of time bought, objects in search results which should only pop up while active, and frontpage posts which should be set to inactive after a prespecified time.

    My current idea is to just give those types of objects a StartDate and an EndDate, and filter the search routines to only show results which fall in the range StartDate < currentDate < EndDate. Is this the normal structure, or should there be some sort of auto-routine which periodically checks for objects which are "over time" and sets an "inactive" property to true? That approach seems like such a hassle, since I'd need a checker which runs, say, every 5 minutes and scans all DB objects - a bad idea, it seems to me. So is the first structure the most commonly used, or are there other options? When searching on Google or SO, the queries only return results about setting users inactive.
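
    For what it's worth, the first structure (filtering at read time) is the common one: rows simply stop matching once their window passes, and no sweeper job is needed unless something external has to happen at expiry (emails, cache purges). A minimal sketch of the window test, with a hypothetical Ad type:

        #include <chrono>

        struct Ad {
            std::chrono::system_clock::time_point startDate;
            std::chrono::system_clock::time_point endDate;
        };

        // "Active" is derived from the clock, not stored, so it can never be
        // stale; the SQL equivalent is
        //   WHERE StartDate < @now AND @now < EndDate
        bool isActive(const Ad &ad) {
            auto now = std::chrono::system_clock::now();
            return ad.startDate < now && now < ad.endDate;
        }

        int main() {
            Ad ad;
            ad.startDate = std::chrono::system_clock::now();
            ad.endDate = ad.startDate + std::chrono::hours(24);
            bool live = isActive(ad);  // true for the next 24 hours
            return live ? 0 : 1;
        }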

