Search Results

Search found 69140 results on 2766 pages for 'design time'.


  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock, and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom.

    My problem: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printf's ;)):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if (WSAStartup(0x0202, &wsaData)) {
                printf("WSAStartup failed!\n");
                WSACleanup();
                exit(0); // :)
            }

            sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock == INVALID_SOCKET) {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR) {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
            return true; // keep the handler connected to the main loop
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me. Whether I access the socket through glib or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and the read handler gets invoked. The glibmm docs also state that it's OK to call recvfrom() even while the socket is polled (Glib::IOChannel::create_from_win32_socket()).

    I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. That wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.
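
    The usual culprit with a socket in nonblocking mode is that something ends up polling it eagerly instead of sleeping in the kernel until data arrives. As an illustration only (this is not glib's actual implementation, and the port number is made up), here is a minimal Python sketch of the difference between the two styles:

        import select
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5000))  # hypothetical port
        sock.setblocking(False)

        # Busy polling: recvfrom() on a nonblocking socket fails immediately
        # when nothing is queued, so this loop spins at full CPU while idle.
        #
        #   while True:
        #       try:
        #           data, addr = sock.recvfrom(4096)
        #       except BlockingIOError:
        #           continue

        # Blocking wait: select() sleeps in the kernel until the socket is
        # readable, so an idle listener consumes essentially no CPU.
        while True:
            readable, _, _ = select.select([sock], [], [])
            if sock in readable:
                data, addr = sock.recvfrom(4096)
                # process packet...

    A nonblocking socket by itself doesn't cost CPU; what matters is whether the main loop's poll actually blocks or wakes up repeatedly, which is worth verifying with a profiler that can see inside the glib DLL.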


  • Pass parameter one time, but use more times

    - by Gabriel L. Oliveira
    I'm trying to do this:

        commands = {
            'py': 'python %s',
            'md': 'markdown "%s" "%s.html"; gnome-open "%s.html"',
        }

        commands['md'] % 'file.md'

    But as you can see, commands['md'] uses the parameter three times, while commands['py'] uses it just once. How can I repeat the parameter without changing the last line (so, still passing the parameter one time)?
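
    One way to do this (a sketch of one idiom, not the only option) is to switch the templates to named placeholders: %-formatting with a mapping substitutes the same value as many times as the placeholder appears, while the call site still passes it once.

        commands = {
            'py': 'python %(f)s',
            'md': 'markdown "%(f)s" "%(f)s.html"; gnome-open "%(f)s.html"',
        }

        # The same named value fills every occurrence of %(f)s.
        print(commands['md'] % {'f': 'file.md'})
        # markdown "file.md" "file.md.html"; gnome-open "file.md.html"

    str.format-style templates ('markdown "{0}" "{0}.html"' with .format('file.md')) achieve the same with positional indexes.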


  • Should I create protected constructor for my singleton classes?

    - by Vijay Shanker
    By design, in the Singleton pattern the constructor is marked private, and a creational method returns the private static instance of the same type. I have created my singleton classes like this:

        public class SingletonPattern { // singleton class
            private static SingletonPattern pattern = new SingletonPattern();

            private SingletonPattern() {
            }

            public static SingletonPattern getInstance() {
                return pattern;
            }
        }

    Now I need to extend a singleton class to add new behaviors, but the private constructor is not letting me define the child class. I was thinking of changing the default constructor from private to protected in the singleton base class. What problems could there be if I define my constructors as protected? Looking for expert views...
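
    For intuition, here is a sketch of one pitfall in Python (standing in for Java here; Python has no access modifiers, so the "anyone can subclass" situation is the default): once subclassing is allowed, "exactly one instance" becomes a promise the base class can no longer enforce by itself.

        class Singleton:
            _instance = None

            def __new__(cls):
                # _instance is looked up through inheritance, so once the
                # base class has been instantiated, every subclass
                # constructor silently returns the base instance.
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                return cls._instance

        class Extended(Singleton):
            pass

        a = Singleton()
        b = Extended()
        print(a is b)   # True
        print(type(b))  # <class '__main__.Singleton'> -- never an Extended

    The Java mechanics differ (each subclass would need its own static instance and getInstance()), but the tension is the same: a protected constructor trades the pattern's single-instance guarantee for extensibility.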



  • Date and time Query - problem

    - by Gold
    Hi, I try to run this query:

        select *
        from WorkTbl
        where (Tdate >= '20100414' AND Ttime >= '06:00')
          and (Tdate <= '20100415' AND Ttime <= '06:00')

    I have a row with date 14/04/2010 and time 14:00, but I can't see it. How do I fix the query? Thanks in advance.
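
    The likely problem is that date and time are compared as independent columns: the 14:00 row passes both date bounds but fails Ttime <= '06:00', even though 14:00 on the 14th lies inside the intended window (14th 06:00 to 15th 06:00). The usual fix is to compare one combined timestamp per row rather than two separate fields. A Python sketch of the interval logic (the combining itself would be done in SQL, e.g. with a datetime column or expression):

        from datetime import datetime

        start = datetime(2010, 4, 14, 6, 0)   # 14/04/2010 06:00
        end   = datetime(2010, 4, 15, 6, 0)   # 15/04/2010 06:00
        row   = datetime(2010, 4, 14, 14, 0)  # the row that "disappears"

        # Comparing full timestamps keeps date and time together, so 14:00
        # on the 14th is inside the window even though 14:00 > 06:00.
        print(start <= row <= end)  # True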


  • Checking date against date range in Python

    - by Flowpoke
    I have a date variable, 2011-01-15, and I would like to get a boolean back telling me whether that date is within 3 days from today. I'm not quite sure how to construct this in Python. I'm only dealing with dates, not datetimes.

    My working example is a "grace period": a user logs into my site, and if the grace period is within 3 days of today, additional scripts etc. are omitted for that user. I know you can do some fancy/complex things with Python's date module(s), but I'm not sure where to look.
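
    A minimal sketch with the standard library (the boundary choices - inclusive ends, and a window looking forward from today - are assumptions to adjust to the grace-period semantics you want):

        from datetime import date, timedelta

        def within_days(target: date, days: int = 3) -> bool:
            # True when target falls between today and `days` days from now.
            today = date.today()
            return today <= target <= today + timedelta(days=days)

        print(within_days(date(2011, 1, 15)))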


  • View artifacts leaking into the model of MVC

    - by Jono
    In an ASP.NET MVC application (which has very little chance of having its view technology ported to something non-HTML, but whose functional requirements evolve weekly,) how much HTML should ideally be allowed to be directly represented in the Model? I might come across as a design bigot for this, but I regard it as bad practice to allow any view constructs to "leak" into the model in an MVC application (and vice versa). For example, a Model that represents an item you're about to purchase should know nothing about the HTML check box that says "add giftwrap/message", nor should it know about any HTML drop down lists for payment card types. Conversely the View shouldn't be doing work like figuring out button text by translating keys into values (by looking in resource files.)
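
    One common way to keep both sides clean is a separate presentation model (view model) between the domain and the view. A rough sketch of the shape of that split, in Python with invented names (the original is ASP.NET MVC, where a view model class per view plays the same role):

        # Domain model: pure business state, no HTML vocabulary.
        class PurchaseItem:
            def __init__(self, name: str, price_cents: int):
                self.name = name
                self.price_cents = price_cents

        # Presentation model: owns view-only concerns such as the giftwrap
        # checkbox state, the payment-card options for a drop-down, and the
        # key-to-text translation the view shouldn't be doing.
        class PurchaseItemViewModel:
            CARD_TYPES = ["Visa", "MasterCard", "Amex"]

            def __init__(self, item: PurchaseItem, labels: dict):
                self.item = item
                self.giftwrap_checked = False
                self.buy_button_text = labels["buy_button"]

    The domain object stays ignorant of check boxes, and the view receives ready-made text instead of resource keys.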


  • Why doesn't TextBlock databinding call ToString() on a property whose compile-time type is an interface?

    - by Jay
    This started with weird behaviour that I thought was tied to my implementation of ToString(), and I asked this question: http://stackoverflow.com/questions/2916068/why-wont-wpf-databindings-show-text-when-tostring-has-a-collaborating-object

    It turns out to have nothing to do with collaborators and is reproducible. When I bind Label.Content to a property of the DataContext that is declared as an interface type, ToString() is called on the runtime object and the label displays the result. When I bind TextBlock.Text to the same property, ToString() is never called and nothing is displayed. But if I change the declared property to a concrete implementation of the interface, it works as expected. Is this somehow by design? If so, any idea why?

    To reproduce, create a new WPF Application (.NET 3.5 SP1) and add the following classes:

        public interface IFoo
        {
            string foo_part1 { get; set; }
            string foo_part2 { get; set; }
        }

        public class Foo : IFoo
        {
            public string foo_part1 { get; set; }
            public string foo_part2 { get; set; }

            public override string ToString()
            {
                return foo_part1 + " - " + foo_part2;
            }
        }

        public class Bar
        {
            public IFoo foo
            {
                get { return new Foo { foo_part1 = "first", foo_part2 = "second" }; }
            }
        }

    Set the XAML of Window1 to:

        <Window x:Class="WpfApplication1.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <StackPanel>
                <Label Content="{Binding foo, Mode=Default}"/>
                <TextBlock Text="{Binding foo, Mode=Default}"/>
            </StackPanel>
        </Window>

    And in Window1.xaml.cs:

        public partial class Window1 : Window
        {
            public Window1()
            {
                InitializeComponent();
                DataContext = new Bar();
            }
        }

    When you run this application, you'll see the text only once (at the top, in the label). If you change the type of the foo property on the Bar class to Foo (instead of IFoo) and run the application again, you'll see the text in both controls.


  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput data-oriented development, and one thing I don't have an immediate answer for is how to allocate strings quickly. On RISC processors there is the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Also, cache coherence is important on most CPUs, so that has to be influential in the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so: any ideas for string sizes of 5-30?
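
    One branch-light starting point for the stated 5-30 byte range is a fixed-slot pool: the size class is a constant (or, with several classes, an arithmetic computation rather than a chain of comparisons), and allocation is a free-list pop. A toy Python sketch of the bookkeeping only - a real allocator would be native code, and the single 32-byte class is an assumption:

        class StringPool:
            SLOT = 32  # one fixed size class covers 5-30 byte strings

            def __init__(self, capacity: int):
                self.buf = bytearray(self.SLOT * capacity)  # contiguous arena
                self.free = list(range(capacity))           # free slot indices

            def alloc(self, s: bytes) -> int:
                i = self.free.pop()  # O(1): no searching, no size branching
                self.buf[i * self.SLOT : i * self.SLOT + len(s)] = s
                return i

            def release(self, i: int) -> None:
                self.free.append(i)

    A contiguous arena also helps the cache side of the question: strings allocated together stay physically close together.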


  • R: Forecast package: Automatic algorithm for composite model involving ETS and AR

    - by phanikishan
    Hey, I would like to write code that automatically selects the best composite model involving ETS as well as autoregressive models. What criteria should I base my selection on? Also, if I'm using the auto.arima function from the forecast package in R to deduce the number of AR terms and the corresponding coefficients, does my input series necessarily have to be stationary, or would the value of d be selected automatically, returning a non-stationary model where appropriate? Thanks, Phani


  • openmp program elapsed time not scaling with increased threads

    - by Griff
    I've got this OpenMP Fortran program doing an embarrassingly parallel problem - a do loop over 512^3 elements. See the output below. Why would there be such strange behavior in the elapsed time as a function of threads? I thought it would peak at a sweet spot and then slowly degrade. This clearly isn't happening. Perhaps I misunderstand something about OpenMP.

        Threads   omp_get_wtime
          1       103.76298500015400
          2        65.346454000100493
          4        45.923643999965861
          7        38.074195000110194
          8        36.968765000114217
          9        39.45981499995105
         10        40.753379000118002
         12        39.577559999888763
         14        37.909950000001118


  • OOP App Architecture: Which layer does a lazy loader sit in?

    - by JW
    I am planning the implementation of an Inheritance Mapper pattern for an application component (http://martinfowler.com/eaaCatalog/inheritanceMappers.html).

    One feature it needs is for a domain object to reference a large list of aggregated items (10,000 other domain objects), so I need some kind of lazy-loading collection to be passed out of the aggregate root domain object to other domain objects.

    To keep my (PHP) model scripts organised, I am storing them in two folders:

        MyComponent\
            controllers\
            models\
                domain\   <- domain objects, DDD repository, DDD factory
                daccess\  <- PoEAA data mappers, SQL queries etc.
            views\

    But now I am racking my brains wondering where my lazy-loading collection sits. Any suggestions/justifications for putting it in one place over another?

    DDD = Domain-Driven Design, Eric Evans - book
    PoEAA = Patterns of Enterprise Application Architecture, Martin Fowler - book
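
    One common placement (a sketch of the virtual-proxy idea, with invented names, and in Python rather than PHP): declare the collection's contract in the domain layer, and let the data-access layer supply the lazy implementation, so the domain depends only on the interface.

        # domain/: the domain layer sees only this contract.
        class ItemCollection:
            def __iter__(self):
                raise NotImplementedError

        # daccess/: the implementation knows about the data mapper.
        class LazyItemCollection(ItemCollection):
            def __init__(self, mapper, owner_id):
                self._mapper = mapper
                self._owner_id = owner_id
                self._items = None  # nothing loaded yet

            def __iter__(self):
                if self._items is None:  # load on first access
                    self._items = self._mapper.find_for_owner(self._owner_id)
                return iter(self._items)

    The aggregate root hands out ItemCollection and never learns that SQL is involved.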


  • Why are difference lists more efficient than regular concatenation?

    - by Craig Innes
    I am currently working my way through the Learn You a Haskell book online, and have come to a chapter where the author explains that some list concatenations can be inefficient. For example:

        ((((a ++ b) ++ c) ++ d) ++ e) ++ f

    is supposedly inefficient. The solution the author comes up with is to use 'difference lists', defined as:

        newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

        instance Monoid (DiffList a) where
            mempty = DiffList (\xs -> [] ++ xs)
            (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))

    I am struggling to understand why DiffList is more computationally efficient than a simple concatenation in some cases. Could someone explain to me in simple terms why the above example is so inefficient, and in what way DiffList solves this problem?
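
    For intuition, here is a rough Python analogue (it models only the reassociation; the real cost argument also depends on Haskell's lazy, linked lists, where each ++ walks its entire left operand). A difference list represents a list as a function that prepends it to whatever tail it is given, so appending becomes function composition, and when the final list is materialised the concatenations run right-associated:

        # A difference list is a function from a tail to the full list.
        def dlist(xs):
            return lambda tail: xs + tail

        def append(f, g):
            # Composition: g builds the right part first, then f prepends
            # the left part once - effectively right-associated appends.
            return lambda tail: f(g(tail))

        a, b, c = dlist([1, 2]), dlist([3, 4]), dlist([5, 6])
        combined = append(append(a, b), c)
        print(combined([]))  # [1, 2, 3, 4, 5, 6] - apply to an empty tail

    In Haskell, ((((a ++ b) ++ c) ++ d) ++ e) ++ f re-walks the accumulated prefix for every append, which is quadratic in the worst case; the composed version walks each piece only once.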


  • Is there such a thing as too many tables?

    - by Stacey
    I've been searching Stack Overflow for about an hour now and couldn't find any related topics, so I apologize if this is a duplicate question. My inquiry is this: is there a point at which there are too many tables in a database, even if the structure is well organized, thought out, and perfectly facilitates the design intent?

    I have a database that is quickly approaching 40 tables - about 10 main ones, and over 30 ancillary tables (junction tables, 'enumeration' tables, etc.). Am I just a bad developer, or should I be trying something different? It seems like so many to me, and I'm really worried about how it will impact the performance of the project. I have done a lot of condensing where possible, grouped similar things where possible, etc. The database is built in MS SQL 2008.


  • C# Running several tasks at different intervals

    - by Nir
    A design question: I'd like to build a Windows service that executes different commands at different intervals. For simplicity's sake, let's say I want to run some batch files. The input it gets is a list of files and the intervals at which to execute them. For example:

        a.bat - 4 minutes
        b.bat - 1 minute
        c.bat - 1 minute
        d.bat - 2 minutes

    I was thinking about sorting them according to intervals and then setting a timer for each of the intervals. I'm not sure this is the best solution, and I'd be happy to hear some feedback. Thanks!
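
    An alternative to one timer per distinct interval is a single scheduler driven by a priority queue of next-run times. A rough Python sketch of the idea (the original is C#, where System.Threading.Timer or a task scheduler would play the timer role; running each command synchronously here is a simplification):

        import heapq
        import subprocess
        import time

        jobs = [("a.bat", 240), ("b.bat", 60), ("c.bat", 60), ("d.bat", 120)]

        # Heap of (next_run_time, interval, command): one timer, many schedules.
        now = time.time()
        queue = [(now + interval, interval, cmd) for cmd, interval in jobs]
        heapq.heapify(queue)

        while True:
            next_run, interval, cmd = heapq.heappop(queue)
            time.sleep(max(0.0, next_run - time.time()))  # sleep until due
            subprocess.run(cmd, shell=True)
            heapq.heappush(queue, (next_run + interval, interval, cmd))

    A heap keeps "what runs next" an O(log n) question no matter how many distinct intervals there are.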


  • Fitts Law, applying it to touch screens

    - by Caylem
    I've been reading a lot about UI design lately, and Fitts's Law keeps popping up. From what I gather, it basically says that the larger an item is, and the closer it is to your cursor, the easier it is to click on. So what about touch-screen devices, where the input comes from multiple touches or just single touches? What are the fundamentals to take into account there? Should it be something like: the hands of the user are on the sides of the device, so the buttons should be close to the left and right sides of the device? Thanks

