Search Results

Search found 8219 results on 329 pages for 'less'.


  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs to the feature set that I need while also providing some functionality, the implementation of which will be unique to each platform. At the moment, I have implemented it by defining an abstract class, which has methods that create objects implementing interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the point of view of my application, but I have done it, for my first API, by creating chains of subclasses which add more specific functionality as the features of the APIs they expose become less generic. An example would probably demonstrate this better:

    //The interface as seen by the application
    interface IGenericResource { byte[] GetSomeData(); }
    interface ISpecificResourceOne : IGenericResource { int SomePropertyOfResourceOne {get;} }
    interface ISpecificResourceTwo : IGenericResource { string SomePropertyOfResourceTwo {get;} }
    public abstract class MyLayer
    {
        ISpecificResourceOne CreateResourceOne();
        ISpecificResourceTwo CreateResourceTwo();
        void UseResourceOne(ISpecificResourceOne one);
        void UseResourceTwo(ISpecificResourceTwo two);
    }
    //The layer as created in my library
    public class LowLevelResource : IGenericResource { byte[] GetSomeData() {} }
    public class ResourceOne : LowLevelResource, ISpecificResourceOne { int SomePropertyOfResourceOne {get{}} }
    public class ResourceTwo : ResourceOne, ISpecificResourceTwo { string SomePropertyOfResourceTwo {get {}} }
    public partial class Implementation : MyLayer
    {
        override UseResourceOne(ISpecificResourceOne one) { DoStuff((ResourceOne)one); }
    }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do this, so I simulate the second chain with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements that interface should be usable wherever that interface is used, but here that is clearly not the case, because the interfaces are being used to allow an object from the layer to masquerade as something else, without the application needing to have access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (Language is C#)
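    One alternative that avoids the second inheritance chain entirely is composition: each platform-specific resource owns a low-level resource and delegates to it, instead of deriving from the previous resource class. A minimal sketch (the interface names mirror the question; everything else is assumed):

```csharp
// Composition sketch: ResourceOne implements the public interface but owns,
// rather than inherits, the low-level plumbing shared with other resources.
public interface IGenericResource { byte[] GetSomeData(); }
public interface ISpecificResourceOne : IGenericResource { int SomePropertyOfResourceOne { get; } }

internal class LowLevelResource              // platform plumbing, invisible to the application
{
    public byte[] GetSomeData() { return new byte[0]; }
}

internal class ResourceOne : ISpecificResourceOne
{
    private readonly LowLevelResource _inner = new LowLevelResource();

    public byte[] GetSomeData() { return _inner.GetSomeData(); }   // delegate instead of inherit
    public int SomePropertyOfResourceOne { get { return 0; } }
}
```

    Because the application only ever sees the interfaces, the library is free to rearrange the internal delegation however each platform requires, without pretending to be two class hierarchies at once.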

    Read the article

  • How to reduce the fan noise and how to increase battery life in ubuntu 11.10/12.04?

    - by mehdi
    I have a brand new Sony Vaio S series laptop (VPCSA2DGX). It came factory installed with Windows 7 Professional 64-bit and runs an Intel Core i5 with a 500 GB HDD and 4 GB RAM. First I installed Ubuntu 11.10 64-bit alongside Windows to dual boot. Later, since the problem was not solved, I installed Ubuntu 12.04 64-bit alongside Windows to dual boot. However, the problem keeps annoying me. Problem: when running Ubuntu 11.10/12.04, the battery lasts only about 1.5 hours, the fan runs loud and continuously, and a lot of heat is generated. System Monitor shows less than 5% CPU used. My laptop has hybrid graphics, and I tried turning off the AMD graphics card and keeping the Intel graphics card on. However, I cannot get the fan noise or heat to go away, and consequently the battery drain continues. BTW, in Windows the laptop gives 4-5 hours of battery power, the fan is silent, and there is no heat problem. Any ideas on how to reduce the fan noise and how to increase battery life in Ubuntu 11.10/12.04?

    Read the article

  • Teaching OO to VBA developers [closed]

    - by Eugene
    I work with several developers who come from less object-oriented backgrounds (VB6, VBA) and are mostly self-taught. As part of moving away from those technologies, we recently started having weekly workshops to go over the features of C#.NET and OO practices and design principles. After a couple of weeks of basic introduction I noticed that they had a lot of problems implementing even basic code. For instance, it took probably 15 minutes to implement Stack.push() and a full hour to implement a simple Stack fully. These developers were trying to do things like passing the top index as a parameter to the method, not creating a private array, and using variables out of scope. But most of all, they were not going through the "design (dia/mono)log" (I need something to do X, so maybe I'll make an array, or put it here). I am a little confused because they are smart people and are able to produce functional code in their traditional environments. I'm curious if anybody else has encountered a similar thing and if there are any particular resources, exercises, books, or ideas that would be helpful in this circumstance.
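    For reference, a minimal C# version of the workshop exercise might look like the sketch below - the key point being that the stack owns its backing array and top index, so callers never pass bookkeeping state into the methods:

```csharp
// Minimal reference implementation of the exercise: the stack owns its own
// storage and top index, growing the private array on demand.
public class IntStack
{
    private int[] _items = new int[4];   // private backing array
    private int _top = 0;                // index of the next free slot

    public void Push(int value)
    {
        if (_top == _items.Length)
            System.Array.Resize(ref _items, _items.Length * 2);
        _items[_top++] = value;
    }

    public int Pop()
    {
        if (_top == 0) throw new System.InvalidOperationException("Stack is empty");
        return _items[--_top];
    }

    public int Count { get { return _top; } }
}
```

    Walking through why _top and _items are private fields rather than parameters is often exactly the "design dialog" the post describes.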

    Read the article

  • Initializing entities vs having a constructor parameter

    - by Vee
    I'm working on a turn-based, tile-based puzzle game, and to create new entities I use this code: Field.CreateEntity(10, 5, Factory.Player()); This creates a new Player at [10; 5]. I'm using a factory-like class to create entities via composition. This is what the CreateEntity method looks like:

    public void CreateEntity(int mX, int mY, Entity mEntity)
    {
        mEntity.Field = this;
        TileManager.AddEntity(mEntity, true);
        GetTile(mX, mY).AddEntity(mEntity);
        mEntity.Initialize();
        InvokeOnEntityCreated(mEntity);
    }

    Since many of the components (and also the logic) of the entities need to know what tile they're in, or what field they belong to, I need the mEntity.Initialize(); call to mark the point at which the entity knows its own field and tile. The Initialize() method contains a call to an event handler, so that I can do stuff like this in the factory class:

    result.OnInitialize += () => result.AddTags(TDLibConstants.GroundWalkableTag, TDLibConstants.TrapdoorTag);
    result.OnInitialize += () => result.AddComponents(new RenderComponent(), new ElementComponent(), new DirectionComponent());

    This works so far, but it is not elegant and it's very open to bugs. I'm also using the same idea with components: they have a parameterless constructor, and when you call the AddComponent(mComponent); method on an entity, it is the entity's job to set the component's entity to itself. The alternative would be having Field, int, int parameters in the factory class, to do stuff like: new Entity(Field, 10, 5); But I also don't like the fact that I have to create new entities like this. I would prefer creating entities via the Field object itself. How can I make entity/component creation more elegant and less prone to bugs?
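    One way to avoid the two-phase Initialize() step is to let the Field pass itself and the tile into the entity's constructor via a blueprint delegate. The types below are hypothetical stand-ins for the question's Field/Tile/Entity classes, just to sketch the shape of it:

```csharp
// Constructor-injection sketch: the blueprint delegate lets the Field supply
// itself and the tile at construction time, so no entity is ever observable
// in a half-initialized state.
public class Tile { }

public abstract class Entity
{
    protected Entity(Field field, Tile tile) { Field = field; Tile = tile; }
    public Field Field { get; private set; }
    public Tile Tile { get; private set; }
}

public class Field
{
    private readonly Tile[,] _tiles = new Tile[16, 16];

    public Entity CreateEntity(int x, int y, System.Func<Field, Tile, Entity> blueprint)
    {
        Tile tile = _tiles[x, y] ?? (_tiles[x, y] = new Tile());
        Entity entity = blueprint(this, tile);   // fully constructed in one step
        // registration with a tile manager, OnEntityCreated events, etc. would go here
        return entity;
    }
}
```

    Usage would then look like field.CreateEntity(10, 5, (f, t) => new PlayerEntity(f, t)), with the factory class returning such delegates instead of ready-made entities.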

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is:
    - a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail)
    - a backend Windows service to queue and route the chat requests; this service also logs the chats, calculates service levels, etc.
    - a Comet server product that routes data between the web frontend and the backend Windows service; this also helps us detect which agents are still connected (online)
    And our hardware architecture today is:
    - 2 servers to host the web UI portion of the application
    - a load balancer to route requests to the 2 different web app servers
    - a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats
    So as it stands today, one of the web app servers could go down and we would be OK. However, if something happened to the SQL Server / Windows service server, we would be boned. My question - how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • New features in SQL Prompt 6.4

    - by Tom Crossman
    We’re pleased to announce a new beta version of SQL Prompt. We’ve been trying out a few new core technologies, and used them to add features and bug fixes suggested by users on the SQL Prompt forum and suggestions forum. You can download the SQL Prompt 6.4 beta here (zip file). Let us know what you think!
    New features:
    - Execute current statement: In a query window, you can now execute the SQL statement under your cursor by pressing Shift + F5. For example, if you have a query containing two statements and your cursor is placed on the second statement, pressing Shift + F5 executes only the second statement.
    - Insert semicolons: You can now use SQL Prompt to automatically insert missing semicolons after each statement in a query. To insert semicolons, go to the SQL Prompt menu and click Insert Semicolons. Alternatively, hold Ctrl and press B then C.
    - BEGIN…END block highlighting: When you place your cursor over a BEGIN or END keyword, SQL Prompt now automatically highlights the matching keyword.
    - Rename variables and aliases: You can now use SQL Prompt to rename all occurrences of a variable or alias in a query. To rename a variable or alias, place your cursor over an instance of the variable or alias you want to rename and press F2.
    - Improved loading dialog box: The database loading dialog box now shows actual progress, and you can cancel loading databases.
    - Single suggestion improvement: SQL Prompt no longer suggests keywords if the keyword has already been typed and no other suggestions exist.
    - Performance improvement: SQL Prompt now has less impact on Management Studio start-up time.
    What do you think? We want to hear your feedback about the beta. If you have any suggestions, or bugs to report, tell us on the SQL Prompt forum or our suggestions forum.

    Read the article

  • Isometric Camera trouble - can't rotate or move correctly

    - by Deukalion
    I'm trying to create a 3D editor, but I've been having some trouble with the camera and understanding each component. I've created 2 cameras that work OK, but now I'm trying to implement an isometric camera in XNA, without success on the rotation and movement of the camera. All I get working is zoom. (Cube with x=3f, y=3f, z=1f in center.) This is the constructor for my IsometricCamera (it inherits from ICamera, with methods for rotation, movement and zoom, and properties for the World/View/Projection matrices):

    public IsometricCamera3D(GraphicsDevice device, float startClip = -1000f, float endClip = 1000f)
    {
        matrix_projection = Matrix.CreateOrthographic(device.Viewport.Width, device.Viewport.Height, startClip, endClip);
        rotation = Vector3.Zero;
        matrix_view = Matrix.CreateScale(zoom) *
                      Matrix.CreateRotationY(MathHelper.ToRadians(45 + 180)) *
                      Matrix.CreateRotationX(MathHelper.ToRadians(30)) *
                      Matrix.CreateRotationZ(MathHelper.ToRadians(120)) *
                      Matrix.CreateTranslation(rotation.X, rotation.Y, rotation.Z);
    }

    The problem is that when I rotate it, all that happens is that the cube gets more or less shiny and nothing else changes. What is wrong, and how should I create my View matrix to move and rotate it correctly? Rotate, Move and Zoom look like MethodName(Vector3 rotation/movement) and Zoom(float value); they just increase the value and then call an update that recreates the View matrix according to the code in the constructor. Currently, in my editor I use the middle button plus mouse movement to rotate the camera, but it's not working like the other camera. In my default camera I use the World matrix to move, but I guess that's not the best way to go, which is why I'm trying this.
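    For comparison, one common way to get rotation and panning working is to orbit a camera position around a target point and rebuild the view with CreateLookAt whenever something changes, rather than baking fixed rotations into the constructor. This is only a sketch of that approach (the field names and the 100-unit camera distance are arbitrary choices, not taken from the question):

```csharp
using Microsoft.Xna.Framework;

// Sketch of an orbiting isometric camera: yaw spins the view around the
// target, Move pans the target, and the view matrix is rebuilt on every change.
public class IsometricCamera
{
    private Vector3 target = Vector3.Zero;       // point the camera looks at (panning moves this)
    private float yawDegrees = 45f;              // rotation around the vertical axis
    private const float PitchDegrees = 35.264f;  // classic isometric elevation angle
    private float zoom = 1f;

    public Matrix View { get; private set; }

    public IsometricCamera() { Rebuild(); }

    public void Rotate(float degrees) { yawDegrees += degrees; Rebuild(); }
    public void Move(Vector3 delta)   { target += delta;       Rebuild(); }
    public void Zoom(float factor)    { zoom *= factor;        Rebuild(); }

    private void Rebuild()
    {
        // Start from a fixed isometric offset, then spin it around Y by the yaw.
        Vector3 offset = new Vector3(0f, (float)System.Math.Tan(MathHelper.ToRadians(PitchDegrees)), 1f);
        offset = Vector3.Transform(offset, Matrix.CreateRotationY(MathHelper.ToRadians(yawDegrees)));
        Vector3 position = target + offset * 100f;   // distance is arbitrary for an orthographic camera

        View = Matrix.CreateLookAt(position, target, Vector3.Up) * Matrix.CreateScale(zoom);
    }
}
```

    The projection matrix can stay exactly as in the question; only the view needs to change when the user rotates, pans or zooms.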

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
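    The question is about Java, but the idea is compact enough to sketch in C#: keep each row run-length encoded as (run length, tile id) pairs and, when a tile changes, re-encode only that row. Re-encoding a single 128-tile row is a few hundred comparisons, so doing it on every edit is unlikely to be the bottleneck.

```csharp
using System.Collections.Generic;

// RLE sketch for one map row: Encode collapses runs of identical tile ids,
// Decode expands them back so a single row can be edited and re-encoded.
public static class RowRle
{
    public static List<KeyValuePair<int, int>> Encode(int[] row)
    {
        var runs = new List<KeyValuePair<int, int>>();
        int i = 0;
        while (i < row.Length)
        {
            int value = row[i], count = 1;
            while (i + count < row.Length && row[i + count] == value) count++;
            runs.Add(new KeyValuePair<int, int>(count, value));   // (run length, tile id)
            i += count;
        }
        return runs;
    }

    public static int[] Decode(List<KeyValuePair<int, int>> runs, int length)
    {
        var row = new int[length];
        int pos = 0;
        foreach (var run in runs)
            for (int n = 0; n < run.Key; n++) row[pos++] = run.Value;
        return row;
    }
}
```

    Editing a tile then becomes: decode the affected row, change one entry, encode it again - the other 127 rows and the other layers are untouched.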

    Read the article

  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
    Filtering the informative, insightful and quirky from the fire hose of cloud-based hype. Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations: "…a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015." With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years. Cloud computing also looks to save energy: "A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction." More on that story here. The expansion of Windows Azure has been in the news with the announcement of “East US” and “West US” datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • Problems with Intel Video Resolution on Acer Laptop Wide Display

    - by ricstr
    I have an Acer Aspire 5332 laptop on which I have just installed Ubuntu 12.04 x64, and it is causing some issues with the video display on boot and with the video resolution. First and foremost, it will only boot past the purple screen if GRUB has been edited to replace 'quiet splash' with 'nomodeset'. Secondly, once it has booted with the 'nomodeset' option, it does not allow me to change the resolution higher or lower than 1024 x 768. Is it OK to use 'nomodeset' for normal use? Will this compromise the performance of other devices? The video card is an on-board one, integrated within the Intel GL40 chipset. The display is a wide-screen LCD, and under Windows it could operate at various resolutions. Ideally I would like it to operate at a resolution that fits the wide-screen display, as the picture is a bit stretched at the moment and I have less desktop space than I am used to. I believe the optimal resolution is 1366 x 768. Below is some information from the terminal which may be useful.

    ricstr@Aspire-5332:~$ lspci | grep -i VGA
    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
    ricstr@Aspire-5332:~$ xrandr
    xrandr: Failed to get size of gamma for output default
    Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
    default connected 1024x768+0+0 0mm x 0mm
       1024x768 0.0*

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - PhD - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like hired guns from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems to be careful to check for that circumstance).
The only noticeable competition is with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly push me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even fewer will understand its importance. So if anything I have said is false then please inform me. If you think I come off as more of a masters or a PhD person, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's some interesting debate going on between me and my colleague when it comes to handling SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service - should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be." On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace so that it is easy to troubleshoot the root cause." I'd like to know what you think about this. Thank you.
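    In WCF terms, the colleague's position usually translates into declaring typed fault contracts for the user-recoverable cases while leaving everything else as the opaque, generic fault that Lowy recommends. A hedged sketch (the contract and fault type names are made up for illustration):

```csharp
using System.ServiceModel;
using System.Runtime.Serialization;

// User-recoverable problems become typed faults the client can handle;
// anything else surfaces as the indistinguishable generic FaultException.
[DataContract]
public class ValidationFault
{
    [DataMember] public string FieldName { get; set; }
    [DataMember] public string Message { get; set; }
}

[ServiceContract]
public interface IDepositService
{
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]   // published as part of the contract
    void Deposit(string accountId, decimal amount);
}

public class DepositService : IDepositService
{
    public void Deposit(string accountId, decimal amount)
    {
        if (amount <= 0)
            throw new FaultException<ValidationFault>(
                new ValidationFault { FieldName = "amount", Message = "Amount must be positive" });

        // Any other exception thrown here reaches the client as a generic,
        // opaque fault - which is Lowy's default position for system errors.
    }
}
```

    The two views are not entirely exclusive: the typed faults carry only what a user could act on, while stack traces and internals stay on the service side.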

    Read the article

  • PHP Battle System for RPG game

    - by Jay
    I posted this a while ago on Stack Overflow; they thought it would be a better fit here, and I agree. Essentially I know what I want to accomplish, and I have something to the effect of what I want, but I am not satisfied with it. Here's the problem. Each user has some stats: STR (how hard they hit), DEF (dodging/blocking attacks), SPD (when they can strike), and STAMINA (basically their endurance in game; if this runs out they can no longer fight and lose). What I need is something like this:
    UserA Stats: STR: 1,000 DEF: 2,500 SPD: 2,000 (HP: 1000/1000)
    UserB Stats: STR: 1,500 DEF: 500 SPD: 4,000 (HP: 1000/1000)
    Because the second user has double the speed, he lands twice as many hits on the first user before he gets hit. Because he has less strength than the first user's defence, he will do no, or very little, damage. This is how the battle would theoretically go:
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage, and sends him to the hospital!
    I was using this code, which is buggy and not efficient, and I just need a better way to do this: http://pastebin.com/15LiQQuJ Oh, and if anyone has some good ideas on how to improve the concept that would be cool too! It's not that elaborate, so I'll be thinking of all sorts of things to make it more dynamic. Thanks.
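    The game is PHP, but the scheduling logic is language-agnostic; here is a sketch in C# of one clean way to do it. Each fighter's next turn is scheduled at intervals inversely proportional to SPD (cross-multiplied into integers so ties resolve exactly), and damage is simply attacker STR minus defender DEF, floored at zero - which reproduces the B, B, A, B, B, A order from the example above:

```csharp
using System;

public class Fighter
{
    public string Name;
    public int Str, Def, Spd;
    public int Hp = 1000;
    public bool Alive { get { return Hp > 0; } }
}

public static class Battle
{
    public static void Run(Fighter a, Fighter b)
    {
        // Next-turn "times" cross-multiplied by both speeds so they stay integers:
        // a's real period is 1/a.Spd, which scales to b.Spd (and vice versa).
        long nextA = b.Spd, nextB = a.Spd;

        // Turn cap stands in for stamina and keeps a 0-damage stalemate from looping forever.
        for (int turn = 0; turn < 1000 && a.Alive && b.Alive; turn++)
        {
            bool aActs = nextA < nextB || (nextA == nextB && a.Spd > b.Spd);
            if (aActs) { Strike(a, b); nextA += b.Spd; }
            else       { Strike(b, a); nextB += a.Spd; }
        }
    }

    private static void Strike(Fighter attacker, Fighter defender)
    {
        int damage = Math.Max(0, attacker.Str - defender.Def);   // DEF soaks up STR
        defender.Hp -= damage;
        Console.WriteLine("{0} strikes {1} for {2} damage", attacker.Name, defender.Name, damage);
    }
}
```

    Calling Battle.Run with the two example users prints the six lines above and leaves UserB at 0 HP.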

    Read the article

  • Should a project start with the client or the server?

    - by MadBurn
    Pretty simple question with a complex answer. Should a project start with the client or the server, and why? Where should a single programmer start a client/server project? What are the best practices and what are the reasons behind them? If you can't think of any, what reasons do you use to justify why you would choose to start one before the other? Personally, I'm asking this question because I'm finishing up specs for a project I will be doing for myself on the side for fun. But now that I'm finishing this phase, I'm wondering "ok, now where do I begin?" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But, I don't want that to sway the answer as I'm looking for more of an in depth and less specific answer that would apply to any project I begin in the future.

    Read the article

  • How to test the tests?

    - by Ryszard Szopa
    We test our code to make it more correct (actually, less likely to be incorrect). However, the tests are also code - they can also contain errors. And if your tests are buggy, they hardly make your code better. I can think of three possible types of errors in tests:
    1. Logical errors, when the programmer misunderstood the task at hand, and the tests do what he thought they should do, which is wrong;
    2. Errors in the underlying testing framework (e.g. a leaky mocking abstraction);
    3. Bugs in the tests: the test does something slightly different from what the programmer thinks it does.
    Type (1) errors seem to be impossible to prevent (unless the programmer just... gets smarter). However, (2) and (3) may be tractable. How do you deal with these types of errors? Do you have any special strategies to avoid them? For example, do you write some special "empty" tests that only check the test author's presuppositions? Also, how do you approach debugging a broken test case?

    Read the article

  • Apache DVB http video Streaming bandwidth or priority problem

    - by igino manfre'
    I'm streaming a few precompressed DVB videos from the cloud. The streams are generated by VLC on "impossible" ports (such as 64085, 64086, etc.) and reverse proxied by Apache on ports 80 and 8080. All the generated streams are listed in "http://95.110.164.61/indexv.html". From an ADSL connection with enough downlink bandwidth, recalling the stream generated by VLC (such as "http://95.110.164.61:64087/mpg2_6.4") it flows fluently. Recalling the same stream proxied by Apache ("http://95.110.164.61/mpg2_6.4") the stream stops and goes. The only situation in which the Apache-proxied streams flow regularly is from a site connected through 64 Mbps guaranteed bandwidth with an RTT to the server of less than 10 ms. Please note that streams below 2 Mbps are proxied fluently. The system is a single-core Xeon running Windows 2008 R2 with 4 GB of RAM and 1 Gbps of network bandwidth. The drain on computational and bandwidth resources is negligible, and RAM usage is always lower than 50%. On the system I run many VLC streamers. Each of them consumes a variable amount of RAM (from about 25 to 70 MB). By contrast, the couple of httpd.exe processes use no more than 7 MB. Using Wireshark (on the server) I see that VLC directly sends the client many more packets than Apache does, and the stream is fragmented into many frames. I'm not a programmer, just a newbie with Apache. Can anyone please point me to the relevant portion of Apache's huge documentation? Thank you. igino

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems, for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single process, single threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
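    The app in question is Rails, but the pattern is easiest to sketch in a typed language: push every deposit through one single-consumer queue so the legacy system always sees sequence numbers in order, and hand each enqueuer a task it can await so the web request can still tell the user what happened. Everything below (names, the legacy call) is illustrative only:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class DepositRequest
{
    public string AccountId;
    public decimal Amount;
    public TaskCompletionSource<bool> Completion = new TaskCompletionSource<bool>();
}

public class SequentialDepositGateway
{
    private readonly BlockingCollection<DepositRequest> _queue = new BlockingCollection<DepositRequest>();
    private long _seq;

    public SequentialDepositGateway()
    {
        // One consumer => one strictly increasing seqnum stream hitting the legacy system.
        Task.Run(() =>
        {
            foreach (var request in _queue.GetConsumingEnumerable())
            {
                long seq = ++_seq;
                bool ok = SendToLegacySystem(seq, request);   // stand-in for the proprietary protocol
                request.Completion.SetResult(ok);             // unblocks the waiting web request
            }
        });
    }

    // Web handlers call this and await the result to report success or failure to the user.
    public Task<bool> EnqueueAsync(DepositRequest request)
    {
        _queue.Add(request);
        return request.Completion.Task;
    }

    private bool SendToLegacySystem(long seq, DepositRequest request) { return true; }
}
```

    The catch is that the queue has to live in exactly one process; with several Unicorn workers, the equivalent would be one dedicated drainer process fed by Redis, with the web workers blocking on (or polling for) their per-request result.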

    Read the article

  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor, though, in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor since it will try to log in and hang on the connection attempt (there is no timeout in the component). This really isn't an issue on the client side of the system, since the clients don't disconnect and reconnect repeatedly on the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas as to how to have the program run, the main one being to put the communications inside a threaded data module so that if one thread crashes, another thread can test later and the program keeps going. Does this sound like a valid way to go? Any other ideas on how to ensure a reliable monitoring application with a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but I don't have high enough rep to create it.
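    The monitor is Delphi, but the idea translates directly and is sketched here in C#: run the unreliable blocking login on a worker thread and give up on it after a timeout, so a hung connection attempt is reported as a failure instead of freezing the monitor. TryLogin below is just a placeholder for the Asta login call:

```csharp
using System;
using System.Threading.Tasks;

public static class SystemMonitor
{
    public static bool CheckSystem(TimeSpan timeout)
    {
        // The login may hang; that's fine, it is no longer on the monitor's main path.
        Task<bool> attempt = Task.Run(() => TryLogin());

        // Wait returns false if the attempt did not finish in time -
        // treat a hang exactly like a failed check and raise the alert.
        if (!attempt.Wait(timeout))
            return false;

        return attempt.Result;
    }

    private static bool TryLogin()
    {
        // placeholder for the real login over the flaky communications component
        return true;
    }
}
```

    An abandoned attempt still occupies its thread, so if hangs are frequent it is worth tearing the component down or recycling the monitor process periodically.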

    Read the article

  • Ubuntu 12.04 Precise on Dell Inspiron Duo

    - by Roman M. Kos
    I installed the 12.04 version of Ubuntu on my Dell Inspiron Duo, with the help of a program like UNetbootin or something like that. (Besides, I no longer have the kernel panic when plugging the charger in and out like in 11.10.) After that I followed the steps from the first post here: http://ubuntuforums.org/showthread.php?t=1658635 One big problem remains with the touchscreen: when I want to drag with the touchscreen the way I do with the mouse (for example, selecting multiple files), the selection rectangle does not show while I'm dragging; when the drag is finished (I lift my finger off) it shows the rectangle and then hides it. This makes it impossible to drag a window or anything else. Also, some time after using the touchscreen, other things stop working:
    - often - clicking with the mouse (after using the keyboard, functionality is restored)
    - less often - mouse movement stops working (sometimes it recovers, sometimes not)
    - less often still - the keyboard works but no input is accepted (the keyboard has an indicator, so it reacts; the mouse of course does not)
    The test from the eTouchU utility passes perfectly. Any idea how to solve this problem? P.S.: I'm from Ukraine, so sorry for any possible grammar mistakes. P.P.S.: Also, how can I detect the physical position of the tablet mode, for automatic rotation? For example, running a script on each rotation.

    Read the article

  • Bundling in visual studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about Visual Studio 2012 features, describing what is new in Visual Studio 2012, and this post is also part of that series. Nowadays web applications and sites provide more and more features, and because of that we include lots of JavaScript and CSS files in our web applications. So once we load a site, all of the JavaScript and CSS files are loaded in the browser, and if you have lots of JavaScript files it takes a lot of time for the browser to request them all. The following image shows that situation. Here you can see a total of 25 files loaded, at more than 1 MB of total size. As we need our web application or site to be very responsive and high-performance, this will be a performance bottleneck for our site. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy. With the help of this feature we can apply optimization there and increase the performance of our application. To enable this feature in Visual Studio 2012 we just set debug="false" in the web.config of our application, like the following. Now once you enable this feature and run the application in the browser to inspect the traffic, it will have fewer items, like the following. As you can see in the above image there are only 8 items. So after enabling bundling it will automatically combine all the JS and CSS files into one request per bundle. Isn't that a cool feature? This feature will surely have a great impact on performance. Hope you like it. Stay tuned for more. Till then, happy programming!!
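    For anyone wanting to see the other half of the setup, bundles are declared in code with System.Web.Optimization and registered at application start; the bundle names and file paths below are just examples, not taken from the post:

```csharp
using System.Web.Optimization;

// Typical bundle registration for ASP.NET 4.5: with <compilation debug="false" />
// in web.config, each bundle is served as a single minified request instead of
// one request per file.
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Optional: force bundling even when debug="true", useful while testing.
        // BundleTable.EnableOptimizations = true;
    }
}
```

    RegisterBundles is typically called from Application_Start via BundleConfig.RegisterBundles(BundleTable.Bundles), and the views then reference the bundle virtual paths instead of the individual files.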

    Read the article

  • Lexmark E240 printing issues

    - by NoamH
    I have a Lexmark E240 laser printer. I had been using it with 12.04 (32-bit) for 2 years with no significant issues. Since Lexmark does not support this printer on Linux, I used alternative drivers suggested by the community, such as HP LaserJet, E238, generic PS, etc. They all worked fine, more or less. After upgrading to 14.04 (64-bit fresh install) I tried to configure the printer as before, but now I have problems. The test page is OK, but when printing, most of the time the first page in the document is printed only partially and at 300% zoom. The next page might be OK. If I turn the printer off and back on, the first page might be OK, but in the next print job it will be broken again. I tried all the above printer options with the same results. I did NOT install the Lexmark drivers, since they are intended for 12.04 and the package manager reports that the package is of "bad quality" (don't know why). Does anyone have any experience with this printer on 14.04 64-bit?

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram just by right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe there is no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL - yeah, long and convoluted CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

    ###########################################################################################################
    # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
    # Synopsis: Generate a file containing the root table CREATE (DDL) script along with all its related tables #
    ###########################################################################################################

    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

    $RootTableName = "TableName" # The table name, no schema name needed

    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
    $conContext = $srv.ConnectionContext
    $conContext.LoginSecure = $True
    # In case integrated security is not used, uncomment below
    #$conContext.Login = "sa"
    #$conContext.Password = "sapassword"
    $db = New-Object Microsoft.SqlServer.Management.Smo.Database
    $db = $srv.Databases.Item("TargetDatabase")

    $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
    $scrp.Options.NoFileGroup = $True
    $scrp.Options.AppendToFile = $False
    $scrp.Options.ClusteredIndexes = $False
    $scrp.Options.DriAll = $False
    $scrp.Options.ScriptDrops = $False
    $scrp.Options.IncludeHeaders = $True
    $scrp.Options.ToFileOnly = $True
    $scrp.Options.Indexes = $False
    $scrp.Options.WithDependencies = $True
    $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

    $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
    Foreach ($tb in $db.Tables)
    {
        Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

        If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
        {
            Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
            $smoObjects.Add($tb.Urn)
        }
    }

    # The actual act of scripting
    $sc = $scrp.Script($smoObjects)

    Write-Host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line ending sequence. Applications would have to filter that noise themselves. Older Mac versions only use CR in text mode as line endings, so neither UNIX nor Windows would understand its files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode. Implementing newline interpretation in the parser might also remove some overhead of using text mode, as buffers would need to be rewritten (and possibly resized) before returning to the application, while this may be less efficient than when it would happen in the application instead. So, my question is: is there any good reason to still rely on the host OS to translate line endings and file truncation?

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time I spent in each function, but I'd like to have more information about the parameters passed to the function in order to know which parameters impact performance the most. Let's say that I have a function like this:

    int myFunction(int myParam1, int myParam2)
    {
        // Do and return something based on the value of myParam1 and myParam2.
        // The code is likely to use if, for, while, switch, etc....
    }

    I would like a tool that tells me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this:

    For "myFunction":
    value of  | value of  | Number of | Average
    myParam1  | myParam2  | calls     | time
    ----------|-----------|-----------|---------
    1         | 5         | 500       | 301 ms
    2         | 5         | 250       | 1253 ms
    3         | 7         | 1268      | 538 ms
    ...

    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code so that the blocks of code most likely to be executed are tested before the ones less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors and the like).
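    In the absence of an off-the-shelf profiler that breaks timings down by argument values, a lightweight hand-rolled version is easy to sketch (shown here in C#; the wrapper and its names are hypothetical): wrap the call, key a dictionary by the parameter pair, and accumulate call counts and elapsed time per key:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Per-argument-value timing: each (myParam1, myParam2) pair gets its own
// running total of calls and elapsed milliseconds.
public static class ParamProfiler
{
    private static readonly Dictionary<Tuple<int, int>, Tuple<long, long>> Stats =
        new Dictionary<Tuple<int, int>, Tuple<long, long>>();   // key -> (calls, total ms)

    public static int Profile(int myParam1, int myParam2, Func<int, int, int> myFunction)
    {
        var sw = Stopwatch.StartNew();
        int result = myFunction(myParam1, myParam2);
        sw.Stop();

        var key = Tuple.Create(myParam1, myParam2);
        Tuple<long, long> current;
        Stats.TryGetValue(key, out current);
        Stats[key] = Tuple.Create((current == null ? 0 : current.Item1) + 1,
                                  (current == null ? 0 : current.Item2) + sw.ElapsedMilliseconds);
        return result;
    }

    public static void Dump()
    {
        foreach (var entry in Stats)
            Console.WriteLine("p1={0} p2={1} calls={2} avg={3:F1} ms",
                entry.Key.Item1, entry.Key.Item2, entry.Value.Item1,
                entry.Value.Item2 / (double)entry.Value.Item1);
    }
}
```

    This is intrusive and only suits hot spots you already suspect, but it produces exactly the per-parameter table described above.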

    Read the article

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems - usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause - I find it is rarely used, and I often wonder if this is due to a lack of awareness of the feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release - so it has been around for quite a while now - yet I often see scripts like this:

    SELECT somevalue FROM sometable WHERE keyval = XXX

    UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX

    -- now check the update has worked…
    SELECT somevalue FROM sometable WHERE keyval = XXX

    This can be rewritten to achieve the same end result using the OUTPUT clause:

    UPDATE sometable SET somevalue = newvalue
    OUTPUT deleted.somevalue AS 'old value',
           inserted.somevalue AS 'new value'
    WHERE keyval = XXX

    The UPDATE statement with the OUTPUT clause also requires less IO - i.e. I've replaced three SQL statements with one, using only a third of the IO. If you are not aware of the power of the OUTPUT clause, I recommend you look it up in Books Online. And finally, here's an example of the output produced using the Northwind database…

    Read the article
