Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • How do I subscribe to a MSMQ queue but only "peek" the message in .Net?

    - by lemkepf
    We have an MSMQ queue set up that receives messages, which are processed by an application. We'd like to have another process subscribe to the queue and just read each message and log its contents. I have this in place already; the problem is that it's constantly peeking the queue. CPU on the server when this is running is around 40%: mqsvc.exe runs at 30% and this app at 10%. I'd rather have something that just waits for a message to come in, gets notified of it, and then logs it without constantly polling the server.

        Dim lastid As String
        Dim objQueue As MessageQueue
        Dim strQueueName As String

        Public Sub Main()
            objQueue = New MessageQueue(strQueueName, QueueAccessMode.SendAndReceive)
            Dim propertyFilter As New MessagePropertyFilter
            propertyFilter.ArrivedTime = True
            propertyFilter.Body = True
            propertyFilter.Id = True
            propertyFilter.LookupId = True
            objQueue.MessageReadPropertyFilter = propertyFilter
            objQueue.Formatter = New ActiveXMessageFormatter
            AddHandler objQueue.PeekCompleted, AddressOf MessageFound
            objQueue.BeginPeek()
        End Sub

        Public Sub MessageFound(ByVal s As Object, ByVal args As PeekCompletedEventArgs)
            Dim oQueue As MessageQueue
            Dim oMessage As Message
            ' Retrieve the queue from which the message originated
            oQueue = CType(s, MessageQueue)
            oMessage = oQueue.EndPeek(args.AsyncResult)
            If oMessage.LookupId <> lastid Then
                ' Process the message here
                lastid = oMessage.LookupId
                ' let's write it out
                log.write(oMessage)
            End If
            objQueue.BeginPeek()
        End Sub

    Read the article

  • Terrible DotNetNuke performance

    - by Peter Bridger
    I'm involved with a project using DotNetNuke version 05.01.04 Community Edition. We are building our new intranet using it, but performance is terrible. We have five people adding pages and content to it, and every 15-30 seconds they experience a pause of 10 seconds or longer before the system continues and the next screen loads. The server is Windows 2003, 3.8GHz with 1GB of RAM. I'm told by our server admin that CPU and memory don't appear to be the bottleneck. We currently have 350 pages in the system and plan to add 1,000 more, so we need to resolve this performance problem before we can enter content and go live. I just can't see where the bottleneck is. Is there a good way to determine the bottleneck when using DotNetNuke?

    Modules installed:
    - Publish:Engage (not currently in use)
    - Page Blaster (doesn't appear to provide caching when users are logged in using Integrated Authentication)
    - SimpleGallery
    - XMod
    - Content Manager

    IIS setup: application recycling completely disabled (apart from a 2am recycle).

    New findings, 18th March 2010: The main bottleneck was due to version 5.1.4 having a bug which caused 1,300 database roundtrips on an average page, because database in-memory caching was broken. We've upgraded to 5.2.4, which has resolved this bottleneck. Now the next biggest bottleneck is the navigation. We've used both DDR:Menu and DDN:Nav, but both have a major impact on performance. Is there a navigation interface out there that doesn't drain performance so badly?

    Read the article

  • Replace Temp with Query

    - by student
    The Replace Temp with Query refactoring is recommended quite widely now, but seems very inefficient for very little gain. The method from Martin Fowler's site gives the following steps: extract the expression into a method, then replace all references to the temp with the expression. The new method can then be used in other methods. So this:

        double basePrice = _quantity * _itemPrice;
        if (basePrice > 1000)
            return basePrice * 0.95;
        else
            return basePrice * 0.98;

    becomes:

        if (basePrice() > 1000)
            return basePrice() * 0.95;
        else
            return basePrice() * 0.98;

        double basePrice() {
            return _quantity * _itemPrice;
        }

    Why is this a good idea? Surely it means the calculation is needlessly repeated, and you have the overhead of calling a function. I know CPU cycles are cheap, but throwing them away like this seems careless. Am I missing something?
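    The usual counterargument is that a query this small is effectively free. A minimal sketch (assuming a holder class shaped like Fowler's example; the field names are his, the class name is illustrative) of the refactored form:

        // Illustrative holder for Fowler's example; a HotSpot-style JIT
        // will typically inline a tiny accessor like basePrice(), so the
        // "extra call" usually costs nothing in practice.
        public class Product {
            private final int _quantity;
            private final double _itemPrice;

            public Product(int quantity, double itemPrice) {
                _quantity = quantity;
                _itemPrice = itemPrice;
            }

            private double basePrice() {
                return _quantity * _itemPrice;
            }

            public double discountedPrice() {
                if (basePrice() > 1000)
                    return basePrice() * 0.95;
                return basePrice() * 0.98;
            }
        }

    If profiling ever shows the query is hot and expensive, it can be memoized into a field later without changing any caller.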

    Read the article

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming, so to learn more I decided to see if I could hack together a little Python socket server to power a chat room. I ended up getting it working pretty well, but then I noticed my server's CPU usage spiked to over 100% when I had it running in the background. Here is my code in full (also at http://gist.github.com/332132). I know this is a pretty open-ended question, so besides just helping with my code, are there any good articles I could read that could help me learn more about this?

        import select
        import socket
        import sys
        import threading
        from daemon import Daemon

        class Server:
            def __init__(self):
                self.host = ''
                self.port = 9998
                self.backlog = 5
                self.size = 1024
                self.server = None
                self.threads = []
                self.send_count = 0

            def open_socket(self):
                try:
                    self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
                    self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                    self.server.bind((self.host, self.port))
                    self.server.listen(5)
                    print "Server Started..."
                except socket.error, (value, message):
                    if self.server:
                        self.server.close()
                    print "Could not open socket: " + message
                    sys.exit(1)

            def remove_thread(self, t):
                t.join()

            def send_to_children(self, msg):
                self.send_count = 0
                for t in self.threads:
                    t.send_msg(msg)
                print 'Sent to ' + str(self.send_count) + " of " + str(len(self.threads))

            def run(self):
                self.open_socket()
                input = [self.server, sys.stdin]
                running = 1
                while running:
                    inputready, outputready, exceptready = select.select(input, [], [])
                    for s in inputready:
                        if s == self.server:
                            # handle the server socket
                            c = Client(self.server.accept(), self)
                            c.start()
                            self.threads.append(c)
                            print "Num of clients: " + str(len(self.threads))
                self.server.close()
                for c in self.threads:
                    c.join()

        class Client(threading.Thread):
            def __init__(self, (client, address), server):
                threading.Thread.__init__(self)
                self.client = client
                self.address = address
                self.size = 1024
                self.server = server
                self.running = True

            def send_msg(self, msg):
                if self.running:
                    self.client.send(msg)
                    self.server.send_count += 1

            def run(self):
                while self.running:
                    data = self.client.recv(self.size)
                    if data:
                        print data
                        self.server.send_to_children(data)
                    else:
                        self.running = False
                        self.server.threads.remove(self)
                self.client.close()

        """ Run Server """
        class DaemonServer(Daemon):
            def run(self):
                s = Server()
                s.run()

        if __name__ == "__main__":
            d = DaemonServer('/var/servers/fserver.pid')
            if len(sys.argv) == 2:
                if 'start' == sys.argv[1]:
                    d.start()
                elif 'stop' == sys.argv[1]:
                    d.stop()
                elif 'restart' == sys.argv[1]:
                    d.restart()
                else:
                    print "Unknown command"
                    sys.exit(2)
                sys.exit(0)
            else:
                print "usage: %s start|stop|restart" % sys.argv[0]
                sys.exit(2)

    Read the article

  • Why does Tfs2010 build my Wix project before anything else?

    - by bwerks
    Hi all, a similar question was asked and answered about a year ago, but it was either a different issue (everything was in beta) or misdiagnosed. It's located here: http://stackoverflow.com/questions/688162/msbuild-task-fails-because-any-cpu-solution-is-built-out-of-order. My issue is that I have a WiX installer project, and after upgrading to Tfs2010 on Monday, the build fails on linking because it can't find the build product of the WPF application in the project. After some digging, that's because it hasn't been built yet. Building in Vs2010 works as normal. The WiX project is set to depend on the WPF project, and when viewing Project Build Order in the IDE, everything looks as normal. The problem was originally encountered with only two platform definitions in the solution, x86 and x64. There are also two flavors, Debug and Release, and TFSBuild.proj is set to build all four combinations. There was no occurrence of AnyCPU anywhere. Per the referenced question above, I tried changing the WPF project to use AnyCPU so that it would be built first. At this point, the WiX project used the exact configuration and the WPF project used the flavor with AnyCPU. However, doing so didn't seem to change anything. I'm using the Tfs2010 RTM, Vs2010 RTM, and the most recent version of WiX, which at the time of this writing is 3.5.1602.0, from 2010-04-02. Anyone else running into this?

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service, primarily to access a 3rd-party billing system via a web service, as well as a small SQL database for user personalization. The WCF service uses NHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle NHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about:

    1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with NHibernate caching or database concurrency.

    2) Multiple IIS servers, each with its own WCF service. This removes the single point of failure, but now NHibernate on one machine would not know about database changes made by another machine.

    One solution to number 2 would be to use an IStatelessSession, so we're not doing any caching and NHibernate is always fetching directly from the database. This might be the most feasible, as our personalization database has very few objects in it. I'm also considering a 2nd-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!

    Read the article

  • Integration tests in Continuous Integration environment: Database and filesystem state

    - by dario_ramos
    I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files to the hard drive and references to those in the DB. The software needs all of those, in a coherent state, to work properly. Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the Continuous Integration server (TeamCity), but a colleague said the repo is quite full as it is, and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that, because then the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome. What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness with which to approach a large-scale refactoring. There's lots of spaghetti code and the app's current architecture is hardly unit-testable, which is why I decided on integration tests first. Any alternative approach is welcome.
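    For what it's worth, one common shape for this, sketched below in Java (assuming JUnit 4, hypothetical fixture paths, and an H2 in-memory database standing in for the real one): keep small canonical fixtures in the repo and have every test rebuild its world from them, so nothing on the CI server needs manual upkeep.

        import java.nio.file.*;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;
        import org.junit.Before;

        public abstract class IntegrationTestBase {
            protected Path workDir;

            @Before
            public void restoreKnownState() throws Exception {
                // Fresh copy of the image files the application expects on disk.
                workDir = Files.createTempDirectory("it-images");
                try (Stream<Path> images = Files.list(Paths.get("fixtures/images"))) {
                    for (Path p : images.collect(Collectors.toList())) {
                        Files.copy(p, workDir.resolve(p.getFileName()));
                    }
                }
                // Reset the database from a versioned seed script, so DB rows
                // and file references start out coherent on every run.
                try (Connection c = DriverManager.getConnection("jdbc:h2:mem:it");
                     Statement s = c.createStatement()) {
                    s.execute(new String(Files.readAllBytes(Paths.get("fixtures/seed.sql"))));
                }
            }
        }

    If the real fixtures are too big for the repo, a checksummed archive fetched in a build step gives the same reproducibility without the manual upkeep.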

    Read the article

  • Flash/Flex: "Warning: filter will not render" problem

    - by davidemm
    In my Flex application, I have a custom TitleWindow that pops up in modal fashion. When I resize the browser window, I get this warning: "Warning: Filter will not render. The DisplayObject's filtered dimensions (1286, 107374879) are too large to be drawn." Clearly, I have nothing set with a height of 107374879. After that, any time I mouse over anything in the Flash Player (v. 10), the CPU churns at 100%. When I close the TitleWindow, the problem subsides. Sadly, the warning doesn't indicate which DisplayObject is too large to draw. I've tried attaching explicit heights/widths to the TitleWindow and the components within, but still no luck. [Edit] The plot thickens: I found that the problem only occurs when I set the PopUpManager's createPopUp modal parameter to true; I don't see the behavior when modal is set to false. It's failing while applying, to the other components, the graying filter that comes with being modal. Any ideas how I can track down the one object that has not been initialized but is being filtered during the modal phase? Thanks for reading.

    Read the article

  • Team Foundation Server - A programmer's guide

    - by Filip Ekberg
    In addition to my previous topic on how to use SVN (Branch? Tag? Trunk?), I would like to get in-depth on how a programmer should/could use TFS. The things that are most interesting to me are not how to set up the server, but rather how you use it on a daily basis. In the area of software engineering where your responsibility lies not only in code but in architecture, documentation and other fields, you need to have a collection of your work, preferably in the same place. So these are my points of interest, which I would like to get more knowledge about:

    - How would you structure a TFS workspace/project to support lots of different customers/projects, and maybe different projects per customer?
    - Splitting up the folder structure of the above project into different pieces, such as Code and Documents (Architecture, Requirements and other) - what more could there be, and what would be a nice, commonly used folder structure?
    - An easy-to-browse repository; again the folder structure here is important, however this point is more aimed at different explorers for the repository, not only the built-in Team Foundation Explorer.

    These are just a couple of the points that I would like to know more about. Suggestions on beginners' guides, in-depth guides and links covering the above would be very much helpful. Please feel free to add other important knowledge points to this as well.

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (resource-oriented architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        Client request:
        GET /CREATE_PERSON

        Server response:
        200 OK
        transaction-id: "as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        Server response:
        201 Created
        PersonKey="398u4nsdf"
        // If the request is a duplicate, the server would send this same
        // response without creating a new resource. It would perhaps send
        // an error response if the transaction id was used on a
        // non-duplicate request, but I have control over the client, so I
        // can guarantee that this won't happen.

    The problem that I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
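    One way out, if the server can live with a client-minted surrogate key, is to let the client generate a UUID as the resource key, so a single PUT is naturally idempotent and the GET step disappears entirely. A minimal sketch in Java (the /people/{id} URL shape and response-code behavior are assumptions, not part of the question):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.UUID;

        public class CreatePerson {
            public static void main(String[] args) throws Exception {
                // The client designates the key, so no transaction id is needed.
                String id = UUID.randomUUID().toString();
                HttpRequest put = HttpRequest.newBuilder()
                        .uri(URI.create("https://api.example.com/people/" + id))
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(
                                "{\"first_name\":\"Big bubba\"}"))
                        .build();
                // Retrying this exact request is safe: the server either
                // creates the resource or reports that it already exists.
                HttpResponse<String> resp = HttpClient.newHttpClient()
                        .send(put, HttpResponse.BodyHandlers.ofString());
                System.out.println(resp.statusCode());
            }
        }

    This trades the server-designated key for statelessness; if keys truly must be server-issued, the two-step dance (or an Idempotency-Key-style header) is hard to avoid.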

    Read the article

  • Linux kernel wait_for_completion_timeout not woken up by complete

    - by Jun Li
    I am working on a strange issue with the i2c-omap driver. I am not sure if the problem happens at other times or not, but it happens around 5% of the times I try to power off the system. During system power off, I write to some registers in the PMIC via I2C. In i2c-omap.c, I can see that the calling thread is waiting on wait_for_completion_timeout with a timeout value set to 1 second. And I can see the IRQ call complete (I added a printk AFTER "complete"). However, after "complete" gets called, wait_for_completion_timeout does not return. Instead, it takes up to 5 MINUTES before it returns. And the return value of wait_for_completion_timeout is positive, indicating that there was no timeout. And the whole I2C transaction was successful. In the meantime, I can see printk messages from other drivers, and the serial console still works. This is on Android, and if I use "top" I can see system_server taking about 95% of the CPU. Killing system_server makes wait_for_completion_timeout return immediately. So my question is: what could a user-space app (system_server) do to keep a kernel wait_for_completion_timeout from waking up? Thanks!

    Read the article

  • Windows system monitoring and profiling

    - by Aris
    I have several dozen 64-bit Windows 2003 servers in a high-performance environment with very bursty system utilization. I am looking for a tool (or tools) to monitor and analyze system performance (e.g., CPU utilization, bandwidth, etc.). The tool can either query servers from a central location (SNMP) or require installation of a component on each server. It should poll on a 1-second interval. It should be able to generate pretty graphs which show trends over time. As a nice-to-have, it should be able to send out emails or IMs when certain thresholds are exceeded. The tools I have investigated so far, SolarWinds and PRTG, are not designed to poll this frequently. They seem to be designed for a ~30-second interval: SolarWinds wouldn't go lower than 3 seconds, PRTG chokes at 1 second, and both default to 1 minute. Both tools also seem more focused on outage monitoring and reporting than on metric collection. Given the bursty nature of our applications, such infrequent polling would result in a very inaccurate picture of performance. I am considering rolling my own solution using perfmon. This would be a lot of work, and seems like reinventing the wheel. Are there any tools out there that meet my needs?
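    For the roll-your-own route, a minimal sketch in Java of one low-effort approach: shell out to typeperf (the command-line face of the perfmon counters, shipped with Windows 2003) at a 1-second sample interval and parse its CSV output. The counter paths shown are examples; storage, graphing and alerting would sit on top.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class OneSecondSampler {
            public static void main(String[] args) throws Exception {
                Process p = new ProcessBuilder(
                        "typeperf",
                        "\\Processor(_Total)\\% Processor Time",
                        "\\Memory\\Available MBytes",
                        "-si", "1")            // 1-second sample interval
                    .redirectErrorStream(true)
                    .start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        // CSV rows: "timestamp","cpu","mem" -> store,
                        // graph, or compare against alert thresholds here
                        System.out.println(line);
                    }
                }
            }
        }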

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate), and I have to come up with a flexible access control list mechanism. I have three different kinds of users:

    - System (who can do anything to the system; includes admins and internal daemons)
    - Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    - End Users (they belong to one or more organizations; for each organization, the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user could contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user. I have the option of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it's software as a service, one would expect a lot of different organizations to be part of the same system. I had one concern that it may not be a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACLs, rather to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question, so please correct anything you wish. Thanks
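    One way to model this, sketched as a JPA entity: reify the user-organization-role triple as its own "membership" entity (User, Organization and OrgRole are assumed classes here, and all names are illustrative, not prescribed):

        import javax.persistence.*;

        // One row per (user, organization, role): UX in og1 as both
        // orgadmin and read-only member is simply two Membership rows.
        @Entity
        public class Membership {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne(optional = false)
            private User user;

            @ManyToOne(optional = false)
            private Organization organization;

            // e.g. ORG_ADMIN, ORG_READ_ONLY
            @Enumerated(EnumType.STRING)
            private OrgRole role;
        }

    Authorization checks then reduce to "does a Membership row exist for this (user, org, role)?", which maps cleanly onto Spring Security voters or @PreAuthorize expressions.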

    Read the article

  • Problem with non-blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically, the TF2 server is a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great. Now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a named fifo pipe. What I want is to have this pipe read-only and non-blocking to the server process, so I can write into the pipe and the server executes it, but I still want to write via the console on the server, so if I switch back to the foreground process of the server and type status, I want an answer printed. I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working: the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That's actually working pretty well: I can see the console output of the server, and I can write to the fifo and the server executes the commands, but I can't write to the server via the console anymore. Also, if I write 'exit' to the pipe (which should end the server) while running it in a screen, the screen window gets killed for some reason (wtf, why?). I only need the server to read the fifo without blocking, AND all my keyboard input on the server itself should be sent to the server, AND all server output should be written to the console. Is that possible? If yes, how?

    Read the article

  • How to keep unreachable code?

    - by Gabriel
    I'd like to write a function that has some optional code to execute or not depending on user settings. The function is CPU-intensive, and having ifs in it would be slow, since the branch predictor is not that good. My idea is to make a copy of the function in memory and replace NOPs with jumps when I don't want to execute some code. My working example goes like this:

        int Test()
        {
            int x = 2;
            for (int i = 0; i < 10; i++)
            {
                x *= 2;
                __asm {NOP}; // to skip it replace this
                __asm {NOP}; // by JMP 2 (after the goto)
                x *= 2;      // Op to skip or not
                x *= 2;
            }
            return x;
        }

    In my test's main, I copy this function into newly allocated executable memory and replace the NOPs by a JMP 2, so that the following x *= 2 is not executed. The problem is that I would have to change the JMP operand every time I change the code to be skipped. An alternative that would fix this problem would be:

        __asm {NOP}; // to skip it replace this
        __asm {NOP}; // by JMP 2 (after the goto)
        goto dont_do_it;
        x *= 2;      // Op to skip or not
        dont_do_it:
        x *= 2;

    This way, as a goto compiles to 2 bytes of binary, I would be able to replace the NOPs by a fixed JMP of always 2 in order to skip the goto. Unfortunately, in full optimization mode, the goto and the x *= 2 are removed because they are unreachable at compile time. Hence the need to keep that dead code.

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been seeing anecdotal slowness on our newly minted (VMware-based) SQL Server 2005 database servers. Recently the problem has come to a head, and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either the operating system or the SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Sam M
    I have a WCF service application in VS2010. My local machine is a 32-bit OS, whereas the server is 64-bit. There are around 6 services in my solution. I'm successfully able to host the application in IIS on my local machine, and it works fine. But when I try to host that service application on the server, I get the error below:

        Could not load file or assembly 'GMap.NET.Core' or one of its
        dependencies. An attempt was made to load a program with an
        incorrect format.

    I do have a reference added in my solution for GMap.NET.Core. I have tried to set the platform target in my solution to Any CPU. In the application pool I have set Enable 32-Bit Applications to True, and I have also set Copy Local to TRUE in my solution before publishing. When I build the solution I don't get any errors, and it builds successfully. What else can I try to get my services successfully hosted on the server and accessible through my application?

    Read the article

  • Separation of multipage tiff with compression "CCITT T.6" very slow

    - by Alex
    I need to separate multiframe TIFF files, and use the following method:

        public static Image[] GetFrames(Image sourceImage)
        {
            Guid objGuid = sourceImage.FrameDimensionsList[0];
            FrameDimension objDimension = new FrameDimension(objGuid);
            int frameCount = sourceImage.GetFrameCount(objDimension);
            Image[] images = new Image[frameCount];
            for (int i = 0; i < frameCount; i++)
            {
                MemoryStream ms = new MemoryStream();
                sourceImage.SelectActiveFrame(objDimension, i);
                sourceImage.Save(ms, ImageFormat.Tiff);
                images[i] = Image.FromStream(ms);
            }
            return images;
        }

    It works fine, but if the source image was encoded using CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU (edit: one core is at 100% during the process). When I save the images afterward to a single file using the standard compression (LZW), separating that LZW file takes under 1 second. Saving with CCITT compression also takes very long. Is there a way to speed up the process?

    Read the article

  • Backend raising (INotify)PropertyChanged events to all connected clients?

    - by Jörg Battermann
    One of our 'frontend' developers keeps requesting that we backend developers have the backend notify all connected clients (it's a client/server environment) of changes to objects. As in: whenever one user makes a change, all other connected clients must be notified immediately of the change. At the moment our architecture does not have a notification system of that kind, and we don't have a pub/sub model for explicitly chosen objects (e.g. the one the frontend is currently displaying), which would make sense in such a use case IMHO, but obviously requires extra implementation. However, I thought frontends typically check for locks against concurrent user changes on the same object, and rather pull for changes / load on demand in the background, instead of the backend constantly pushing all changes for all objects to all clients, which seems rather excessive to me. However, it's being argued that e.g. the MS Entity Framework does in fact publish (INotify)PropertyChanged not only for local changes but for all such changes, including those from other client connections; I have found no proof or details regarding this, though. Can anyone shed some light on this? Do other ORMs, etc., provide broadcast (INotify)PropertyChanged events on entities?

    Read the article

  • Java equivalent to VS solution file

    - by Chris
    I'm a C# guy trying to learn Java. I understand the syntax and the basic architecture of the Java platform, and have no problem doing smaller projects myself, but I'd really like to be able to download some open source projects to learn from the work of others. However, I'm running into a stumbling block that I can't seem to find any information on. When I download an open source .NET project, I can open the .sln file with Visual Studio and everything just loads. Sure, there's occasionally a missing reference or something, but there's really very little configuration required to get things going. I'm not sensing the same ease of use with Java. I'm using Eclipse at the moment, and it feels like for every project I have to create a brand new Eclipse project using "create from existing source", and almost nothing compiles properly without significant reconfiguration. In the case of web projects, it's even worse, because Eclipse doesn't appear to support creating a web project from existing source. I have to create a standard Java project from source, then apparently modify the project file to include the bindings for the web toolkit stuff to work properly. Assuming I want to be able to contribute to a project later on, I shouldn't have to make such drastic changes to the file structure to get my IDE into a workable state. What am I missing?
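    The closest Java analogue to a .sln is a build descriptor checked into the project, most commonly a Maven POM (or an Ant build.xml); projects that ship one import into an IDE in roughly one step. A minimal sketch of a POM, with placeholder coordinates and an era-typical JUnit dependency:

        <!-- pom.xml: Maven's project descriptor, the rough .sln/.csproj
             equivalent. IDEs import it directly, or maven-eclipse-plugin
             can generate Eclipse project files from it. -->
        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>some-open-source-app</artifactId>
          <version>1.0-SNAPSHOT</version>
          <dependencies>
            <dependency>
              <groupId>junit</groupId>
              <artifactId>junit</artifactId>
              <version>4.8.1</version>
              <scope>test</scope>
            </dependency>
          </dependencies>
        </project>

    Projects without any build descriptor are the ones that need the painful manual Eclipse setup, which may be what you've been hitting.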

    Read the article

  • Transition from 2D to 3D later in game development

    - by Axarydax
    Hi, I'd like to work on a game, but to prototype it rapidly I'd like to keep it as simple as possible, so I'd do everything in top-down 2D in GDI+ and WinForms (hey, I like them!), so I can concentrate on the logic and architecture of the game itself. I'm thinking about having the whole game logic (server) in one assembly, where the WinForms app would be a client to that game; if/when the time is right, I'd write a 3D client. I am tempted to use XNA, but I haven't really looked into it, so I don't know whether getting up to speed would take too much time, and I really don't want to spend much time on anything other than the game logic, at least while I have the inspiration. On the other hand, I then wouldn't have to abandon everything and transfer to a new platform when transitioning from 2D to 3D. Another idea is just to get over it and learn XNA/Unity/SDL/something, at least to the level where I can make the same 2D version as I could in GDI+, so I won't have to worry about switching frameworks anymore. Let's just say the game is the kind where you watch a dude from behind, run around the game world and interact with objects, so the bird's-eye perspective could be doable for now. Thanks.
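    The "logic in one assembly, client swappable" split usually comes down to the logic never touching a drawing API. A minimal sketch of that seam in Java (GameState and Input are hypothetical types standing in for whatever the game defines):

        // The logic side only exposes state; a renderer is anything that
        // can draw it. Swapping the 2D client for a 3D one later means
        // writing a new Renderer, not touching the rules.
        public interface Renderer {
            void drawWorld(GameState state);
        }

        class GameLoop {
            private final GameState state = new GameState();

            void tick(Input input, Renderer renderer) {
                state.update(input);        // all game rules live here
                renderer.drawWorld(state);  // GDI+-style 2D today, 3D later
            }
        }

    The same shape carries over directly to C#: the logic assembly references neither WinForms nor XNA.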

    Read the article

  • Oracle rownum in db2 - Java data archiving

    - by HonorGod
    I have a data archiving process in Java that moves data between DB2 and Sybase. FYI, this is not done through any import/export process, because there are several conditions on each table that are only available at run-time, so this process is developed in Java. Right now I have a single DatabaseReader and a single DatabaseWriter defined for each source and destination combination, so that data is moved in multiple threads. I want to expand this further so that I can have multiple DatabaseReaders and multiple DatabaseWriters defined for each source and destination combination. So, for example, if the source data is about 100 rows and I define 10 readers and 10 writers, each reader will read 10 rows and give them to its writer. I hope this process will give me extreme performance, depending on the resources available on the server (CPU, memory, etc.). But the problem is that these source tables do not have primary keys, and it is extremely difficult to grab rows in multiple sets. Oracle provides the rownum concept, and I guess life is much simpler there... but what about DB2? How can I achieve this behavior with DB2? Is there a way to say "fetch the first 10 records, then fetch the next 10 records", and so on? Any suggestions/ideas? DB2 version: DB2 v8.1.0.144, Fix Pack 16, Linux.
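    DB2 v8 does support ROW_NUMBER() OVER(), so even a keyless table can be sliced into row-number ranges. A minimal JDBC sketch (table name and slice bounds are placeholders), with one caveat stated loudly: without an ORDER BY inside OVER(), the numbering is not guaranteed to be stable across separate statements, so either order by a column set that is unique in combination, or have all slices taken from one numbered pass.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class RangeReader {
            // Each reader thread gets a disjoint [lo, hi] slice of row numbers.
            private static final String PAGE_SQL =
                "SELECT * FROM (" +
                "  SELECT t.*, ROW_NUMBER() OVER () AS rn FROM SOURCE_TABLE t" +
                ") AS numbered WHERE rn BETWEEN ? AND ?";

            public void readSlice(Connection con, long lo, long hi) throws Exception {
                try (PreparedStatement ps = con.prepareStatement(PAGE_SQL)) {
                    ps.setLong(1, lo);
                    ps.setLong(2, hi);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // hand each row to this reader's paired writer
                        }
                    }
                }
            }
        }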

    Read the article

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems that in the SharePoint blogosphere everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition: a site template is applied once to the database, and from then on the site should not care whether its content was created from a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?

    Edit: links to the blogs that claim there is a performance difference:

    - From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
    - From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
    - From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
    - From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."

    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

    Read the article

  • Rails apps blew up on mediatemple's (dv) server

    - by BandsOnABudget
    I managed to fix this issue, but I wanted to document it here for any others who might have similar problems. I'm running a MediaTemple (dv) Rage server. monit started sending me alerts that I was hitting resource limits on the server. I logged into Plesk and the CPU was pinned at 99.9%. Rebooted the server, catastrophe avoided... Not quite: none of my Rails apps were loading.

    My setup: Ruby 1.8.6, Rails 2.3.5, with Passenger installed as an Apache module.

    I noticed a defunct ruby process, so I killed it and rebooted the server, but ruby kept coming back as defunct. I started trawling through the Apache log and saw errors related to updating RubyGems. I updated to the latest version but then continued to get gem errors. Basically, I had to go through all my apps, update any gems manually, and restart Apache, and all was restored. I'm not really sure of the cause of the issue, but I wanted to note it for posterity. Has anybody else in the community ever had similar issues?

    Read the article

  • Static Data Structures on Embedded Devices (Android in particular)

    - by Mark
    I've started working on some Android applications, and have a question regarding how people normally deal with situations where you have a static data set and an application where that data is needed in memory as one of the standard Java collections or as an array. In my current case I have a spreadsheet with some pre-calculated data. It consists of ~100 rows and 3 columns: one column is a string, one is a float, one is an integer. I need access to this data as an array in Java. It seems like I could:

    1) Encode it in XML - decoding this would be CPU-intensive, in my experience.
    2) Build it into a SQLite database - seems like a lot of overhead for static, array-style access to data I only need in RAM.
    3) Build it into a binary blob and read it in (I've never done this in Java; I miss void *).
    4) Build a Python script that takes the CSV version of my data and spits out a Java function that adds the values to my desired structure as hard-coded values (see the sketch below for the shape this produces).
    5) Store a string array via Android's resource mechanism and compute the other 2 columns on application load. In my case the computation would require a lot of calls to Math.log, Math.pow and Math.floor, which I'd rather avoid for load-time and battery-usage reasons.

    I mostly work on low-power embedded applications in C, and as such option 4 is what I'm used to doing in these situations. It just seems like it should be far easier to gain access to static data structures in Java/Android. Perhaps I'm just being too conscious of battery usage, and in my single case I imagine the answer is that it doesn't matter much, but if every application took that stance it could begin to matter. What approaches do people usually take in this situation? Anything I missed?
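    For what option 4's output might look like, a minimal sketch (names and values are placeholders; a build-time script would emit this file from the CSV, so the runtime cost is just class loading):

        // Generated from precalc.csv - do not edit by hand.
        public final class PrecalcTable {
            private PrecalcTable() {}

            // Three parallel arrays, one per spreadsheet column;
            // row i of the sheet is (NAMES[i], FACTORS[i], BUCKETS[i]).
            public static final String[] NAMES   = { "alpha", "beta", "gamma" };
            public static final float[]  FACTORS = { 0.301f, 1.772f, 2.047f };
            public static final int[]    BUCKETS = { 3, 17, 42 };
        }

    At ~100 rows this loads in effectively zero time and avoids both XML parsing and the Math.* calls at startup.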

    Read the article
