Search Results

Search found 17194 results on 688 pages for 'john little'.

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use to implement multiple 'Worker' PCs coordinated from a central queue manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to coordinate an arbitrary number of 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio. At present, I have a script that makes a web service call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the coordinator can be Windows or Linux. In my initial searches I have come across the following - and I know that some are cross-dependent:
    - Celery (with RabbitMQ)
    - Twisted
    - Django
    Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be a framework that is compatible with PyQt/PySide, so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a server/worker 'queue management' solution, for non-web activities (this is why Django didn't seem the right fit).
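
    As a rough illustration of the Celery option in the list above, a minimal task module might look like the following sketch (the broker URL, task name, and the convert_audio() helper stand in for the existing standalone routine; all are assumptions, not working configuration):

        # Workers run this module; the coordinator queues jobs remotely.
        from celery import Celery

        app = Celery('conversion', broker='amqp://guest@coordinator-host//')  # assumed broker URL

        @app.task
        def convert(source_path, config_url):
            """One conversion job, picked up by whichever worker is free."""
            runtime = convert_audio(source_path, config_url)  # your existing conversion routine
            return {'runtime': runtime}

    The coordinator would then hand out work with convert.delay(source_path, config_url), and RabbitMQ takes care of distributing tasks to available workers.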

  • Sum of squares of each element in a vector using for_each

    - by pierr
    As the function accepted by for_each takes only one parameter (the element of the vector), I have to define a static int sum = 0 somewhere so that it can be accessed after calling for_each. I think this is awkward. Is there a better way to do this (still using for_each)?

        #include <algorithm>
        #include <vector>
        #include <iostream>
        using namespace std;

        static int sum = 0;

        void add_f(int i)
        {
            sum += i * i;
        }

        void test_using_for_each()
        {
            int arr[] = {1, 2, 3, 4};
            vector<int> a(arr, arr + sizeof(arr) / sizeof(arr[0]));
            for_each(a.begin(), a.end(), add_f);
            cout << "sum of the square of the element is " << sum << endl;
        }

    In Ruby, we can do it this way:

        sum = 0
        [1,2,3,4].each { |i| sum += i*i }  # a local variable can be used in the callback function
        puts sum  #=> 30

    Would you please show more examples of how for_each is typically used in practical programming (not just printing out each element)? Is it possible to use for_each to simulate 'programming patterns' like map and inject in Ruby (or map/fold in Haskell)?

        # map in ruby
        >> [1,2,3,4].map { |i| i*i }
        => [1, 4, 9, 16]
        # inject in ruby
        [1, 4, 9, 16].inject(0) { |aac, i| aac += i }  #=> 30

    EDIT: Thank you all. I have learned so much from your replies. We have so many ways to do the same single thing in C++, which makes it a little bit difficult to learn, but it's interesting :)

  • Fastest way to learn Flex and Java EE?

    - by LostWebNewbie
    OK, so my 2 friends and I have to make a webapp, and we think it's a good opportunity to learn JEE and Flex. The thing is, we have very little knowledge about them and we have only 3 months to do it (it doesn't have to be super complicated). So my question is: what, in your opinion, would be the fastest way to learn them both? I guess we need to know some JSP, Servlets, JPA (??), Flex, maybe JavaScript+CSS? Anything else, like EJB? Should we also learn Spring (or Struts)? Obviously reading books would be a good idea, but I bet we won't make the deadline if we try to read all the books... @Edit: I know the basics of JSP/Servlets (I read Head First JSP & Servlets) but I have made only 1 project so far (a semi-decent hangman with JSP/Servlets and JPA for persistence); that's about it. Flex - I'm just starting; I know really just the basics of MXML and AS3. As for why: 1) because we need to do a project for the uni, and since I was thinking about becoming a web dev (yup - JEE+Flex) after graduation, this is the perfect opportunity to learn them.

  • Should TcpClient be used for this scenario?

    - by Martín Marconcini
    I have to communicate with an iPhone. I have its IP address and the port (obtained via Bonjour). I need to send a header that is 0x50544833 (or similar; it's a hex number), then the size of the data (below), and then the data itself. The data is just a string that looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE plist SYSTEM "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>clientName</key>
            <string>XXX</string>
            <key>clientService</key>
            <string>0be397e7-21f4-4d3c-89d0-cdf179a7e14d</string>
            <key>registerCode</key>
            <string>0000</string>
        </dict>
        </plist>

    The requirement also says that I must send the data in little-endian format (which I think is the default for Intel anyway). So it would be: hex_number + size_of_data + string_with_the_above_xml. I need to send that to the iPhone and read the response. What would be, according to your experience, the best way to send this data (and read the response)?
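
    For illustration, the framing described above (header, then size, then payload, with the numeric fields little-endian) could be sketched like this; Python is used here just for brevity, and plist_xml, iphone_host and iphone_port are assumed variables:

        import socket
        import struct

        payload = plist_xml.encode('utf-8')  # the XML string shown above, assumed UTF-8
        frame = struct.pack('<II', 0x50544833, len(payload)) + payload  # '<' = little-endian

        sock = socket.create_connection((iphone_host, iphone_port))  # address from Bonjour
        sock.sendall(frame)
        response = sock.recv(4096)  # read the iPhone's reply
        sock.close()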

  • Send Special Keys to Gtk.VteTerminal

    - by Ubersoldat
    I have an OSS project called Monocaffe connections manager, which uses the Gtk.VteTerminal widget from PyGTK. A nice feature is that it allows users to send commands to different servers' consoles (cluster mode) using a Gtk.TextView for the input. The way I send keystrokes to each Gtk.VteTerminal is by using the feed_child method. For common keys there's no problem: I simply feed what the TextView receives to all the terminals. But when doing so with special keys I get into a little trouble. For "Return" I catch the event and feed the terminal a '\n'. For backspace it is the same: catch the event and feed a '\b'.

        def cluster_backspace(self, widget):
            return self.cluster_send_key('\b')

    The problem comes with other keys like Tab, the arrows, and Esc, which I don't know how to feed as a str so the terminal recognizes them. In the case of Esc it is a real pain, because the users can edit the same file on different servers using vi, but cannot escape insert mode. Anyway, I'm not looking for a complete solution, just ideas, since I've run out of them. Thanks.
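
    One idea, assuming the terminal emulation understands the usual VT100 control sequences: feed the escape sequences for those keys as plain strings, the same way '\n' and '\b' are fed above. A sketch (the exact sequences may vary with the terminal mode):

        # Assumed VT100-style sequences for the troublesome keys.
        SPECIAL_KEYS = {
            'Escape': '\x1b',   # would let vi leave insert mode
            'Tab':    '\t',
            'Up':     '\x1b[A',
            'Down':   '\x1b[B',
            'Right':  '\x1b[C',
            'Left':   '\x1b[D',
        }

        def cluster_special_key(self, keyname):
            # reuses the cluster_send_key() helper shown above
            return self.cluster_send_key(SPECIAL_KEYS[keyname])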

  • C# Convert string to nullable type (int, double, etc...)

    - by Nathan Koop
    I am attempting to do some data conversion. Unfortunately, much of the data is in strings where it should be ints or doubles, etc. So what I've got is something like:

        double? amount = Convert.ToDouble(strAmount);

    The problem with this approach is that if strAmount is empty, I want amount to be null, so that when I add it into the database the column will be null. So I ended up writing this:

        double? amount = null;
        if (strAmount.Trim().Length > 0)
        {
            amount = Convert.ToDouble(strAmount);
        }

    Now this works fine, but I have five lines of code instead of one. This makes things a little more difficult to read, especially when I have a large number of columns to convert. I thought I'd use an extension method on the string class and generics to pass in the type, because it could be a double, an int, or a long. So I tried this:

        public static class GenericExtension
        {
            public static Nullable<T> ConvertToNullable<T>(this string s, T type) where T : struct
            {
                if (s.Trim().Length > 0)
                {
                    return (Nullable<T>)s;
                }
                return null;
            }
        }

    But I get the error: Cannot convert type 'string' to 'T?' Is there a way around this? I am not very familiar with creating methods using generics.

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am, just using a different term. For example, one of my customers the other day mentioned a feature he called "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong; it is just that I had never heard that term before. In the spirit of reducing confusion, what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing but don't? Please differentiate between the two, and please give only one set of terms per answer, so we can vote on the best ones.

  • Link compatibility between C++ and D

    - by Caspin
    D easily interfaces with C. D just as easily interfaces with C++, but (and it's a big but) the C++ needs to be extremely trivial. The code cannot use:
    - namespaces
    - templates
    - multiple inheritance
    - a mix of virtual and non-virtual methods
    - more?
    I completely understand the inheritance restriction. The rest, however, feel like artificial limitations. Now, I don't want to be able to use std::vector<T> directly, but I would really like to be able to link with std::vector<int> as an externed template. The C++ interfacing page has this particularly depressing comment: "D templates have little in common with C++ templates, and it is very unlikely that any sort of reasonable method could be found to express C++ templates in a link-compatible way with D. This means that the C++ STL, and C++ Boost, likely will never be accessible from D." Admittedly, I'll probably never need std::vector while coding in D, but I'd love to use Qt or Boost. So what's the deal? Why is it so hard to express non-trivial C++ classes in D? Would it not be worth it to add some special annotations or something to express at least namespaces?

  • What are the benefits and risks of moving to a Model Driven Architecture approach?

    - by Tone
    I work for a company with about 350 employees, and we are in the process of growing. Our current codebase is not structured very well, and we are looking both at how to improve it immediately (by organizing objects into namespaces, separating concerns, etc.) and at moving to a model-driven architecture approach, where we model and design everything first with UML, then generate code from that model. We have been looking heavily at Sparx Systems Enterprise Architect (EA) (which is UML 2.0 capable), and we are also considering the tools in VS 2010. I know there are other tools out there (Rational XDE being one), but I really do not think we can spend $1500+ per license at this point. I'm not looking for answers on which tool is better than another, but more for experiences moving from a cowboy-coding environment (that is, little planning and design; just jump in and start coding) to a model-driven architecture. Looking back, was it helpful to your organization? What are the pain points? What are the risks? What are the benefits?

  • Algorithm to pick values from set to match target value?

    - by CSharperWithJava
    I have a fixed array of constant integer values about 300 items long (set A). The goal of the algorithm is to pick two numbers (X and Y) from this array that fit several criteria based on an input R. Formal requirement: pick values X and Y from set A such that the expression X*Y/(X+Y) is as close as possible to R. That's all there is to it. I need a simple algorithm that will do that. Additional info: set A can be ordered or stored in any way; it will be hard-coded eventually. Also, with a little bit of math, it can be shown that the best Y for a given X is the closest value in set A to the expression X*R/(X-R). Also, X and Y will always be greater than R. From this, I get a simple iterative algorithm that works OK:

        int minX = 100000000;
        int minY = 100000000;
        foreach X in A
            if (X <= R) continue;
            else
                Y = X*R/(X-R)
                Y = FindNearestIn(A, Y); // do search to find closest useable Y value in A
                if (X*Y/(X+Y) < minX*minY/(minX+minY)) then
                    minX = X;
                    minY = Y;
                end
            end
        end

    I'm looking for a slightly more elegant approach than this brute-force method. Suggestions?
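
    For comparison, a sketch of the same search that keeps set A sorted and uses binary search for the nearest-Y lookup, tracking closeness to R directly as the formal requirement asks (Python for brevity; names are illustrative):

        import bisect

        def nearest(A, target):
            """Nearest value to target in the sorted list A."""
            i = bisect.bisect_left(A, target)
            return min(A[max(i - 1, 0):i + 1], key=lambda v: abs(v - target))

        def best_pair(A, R):
            best, best_err = None, float('inf')
            for x in A:
                if x <= R:
                    continue
                y = nearest(A, x * R / (x - R))  # best Y for this X, per the math above
                err = abs(x * y / (x + y) - R)
                if err < best_err:
                    best, best_err = (x, y), err
            return best

    Over 300 items this is O(n log n), so it is less about speed and more about making the intent (minimize |X*Y/(X+Y) - R|) explicit.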

  • Programmatically change the icon of the executable

    - by Dennis Delimarsky
    I am developing an application called WeatherBar. Its main functionality is based on its interaction with the Windows 7 taskbar — it changes the icon depending on the weather conditions in a specific location. The icons I am using in the application are all stored in a compiled native resource file (.res) — I am using it instead of the embedded resource manifest, for icons only. By default, I modify the Icon property of the main form to change the icons accordingly, and it works fine as long as the icon is not pinned to the taskbar. When it gets pinned, the icon in the taskbar automatically switches to the default one for the executable (the one with index 0 in the resource file). After doing a little bit of research, I figured that a way to change the icon would be to change the shortcut icon (as all pinned applications are actually shortcuts stored in the user folder), but it didn't work. I assume that I need to change the icon for the executable, and therefore use UpdateResource, but I am not entirely sure about this. My executable is not digitally signed, so modifying it shouldn't be an issue. What would be the way to solve this issue?

  • UITableViewRowAnimationBottom doesn't work for last row

    - by GendoIkari
    I've come across a very similar question here: "Inserting row to end of table with UITableViewRowAnimationBottom doesn't animate", though no answers have been given, and his code was also a little different from mine. I have an extremely simple example, built from the Navigation application template.

        NSMutableArray *items;

        - (void)viewDidLoad {
            [super viewDidLoad];
            items = [[NSMutableArray array] retain];
            self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addItem)] autorelease];
        }

        - (void)addItem {
            [items insertObject:@"new" atIndex:0];
            [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:[NSIndexPath indexPathForRow:0 inSection:0]] withRowAnimation:UITableViewRowAnimationBottom];
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return items.count;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }
            cell.textLabel.text = [items objectAtIndex:indexPath.row];
            return cell;
        }

        - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
            if (editingStyle == UITableViewCellEditingStyleDelete) {
                [items removeObjectAtIndex:indexPath.row];
                [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationBottom];
            }
        }

    The problem is, when I either insert or delete the very last row in the table, the animation doesn't work at all; the row just appears or disappears instantly. This only happens with UITableViewRowAnimationBottom, but that's the animation that makes the most sense for creating or deleting table cells in this way. Is this a bug in Apple's framework, or does it do this on purpose? Would it make sense to add an extra cell to the count, and then set up this cell so that it looks like it's not there at all, just to get around this behavior?

  • C++ struct, public data members and inheritance

    - by Marius
    Is it OK to have public data members in a C++ class/struct in certain particular situations? How would that go along with inheritance? I've read opinions on the matter, some stated already here:
    http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than
    http://stackoverflow.com/questions/670958/accessors-vs-public-members
    or in books/articles (Stroustrup, Meyers), but I'm still a little bit in the dark. I have some configuration blocks that I read from a file (integers, bools, floats), and I need to place them into a structure for later use. I don't want to expose these externally, just use them inside another class (I actually do want to pass these config parameters to another class, but I don't want to expose them through a public API). The fact is that I have many such config parameters (15 or so), and writing getters and setters seems an unnecessary overhead. Also, I have more than one configuration block, and these share some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle this situation? Does making a big struct to cover all parameters provide an acceptable compromise (I would have to leave some of them set to their default values for blocks that do not use them)?

  • Given the following list (UL), how can it be serialized and sent to the database?

    - by nobosh
    I have the following structure, which is created with a nested sortable:

        <UL id="container">
            <LI id="main1">
                <input type="checkbox" /> Lorem ipsum dolor sit amet, consectetur
                <UL>
                    <LI id="child2">
                        <input type="checkbox" /> In hac habitasse platea dictumst.
                        <UL></UL>
                    </LI>
                </UL>
            </LI>
            <LI id="main3">
                <input type="checkbox" /> In hac habitasse platea dictumst.
                <UL></UL>
            </LI>
            <LI id="main4">
                <input type="checkbox" /> In hac habitasse platea dictumst.
                <UL></UL>
            </LI>
            <LI id="main5">
                <input type="checkbox" /> In hac habitasse platea dictumst.
                <UL></UL>
            </LI>
        </UL>

    Where I'm stuck is how to send this back to the database. I'm guessing that it needs to be serialized; does that sound right? So that it looks something like:

        item[]=main1&item[]=221&item_221[]=21&item_221[]=2&item_221[]=211&item[]=22

    I'm a little lost at this point and appreciate any tips to move me in the right direction. Thanks, B

  • Re-usable Obj-C classes with custom values: The right way

    - by Prairiedogg
    I'm trying to reuse a group of Obj-C classes between iPhone applications. The values that differ from app to app have been isolated, and I'm trying to figure out the best way to apply these custom values to the classes on an app-to-app basis. Should I hold them in code?

        // I might have 10 customizable values for each class; that's a long signature!
        CarController *controller = [[CarController alloc] initWithFontName:@"Vroom" engine:@"Diesel" color:@"Red" number:11];

    Should I store them in a big settings.plist?

        // Wasteful! I sometimes only use 2-3 of 50 settings!
        AllMyAppSettings *settings = [[AllMyAppSettings alloc] initFromDisk:@"settings.plist"];
        MyCustomController *controller = [[MyCustomController alloc] initWithSettings:settings];
        [settings release];

    Should I have little, optional n_settings.plists for each class?

        // Sometimes I customize
        CarControllerSettings *carSettings = [[CarControllerSettings alloc] initFromDisk:@"car_settings.plist"];
        CarController *controller = [[CarController alloc] initWithSettings:carSettings];
        [carSettings release];

        // Sometimes I don't, and CarController falls back to internally stored, reasonable defaults.
        CarController *controller = [[CarController alloc] initWithSettings:nil];

    Or is there an OO solution that I'm not thinking of at all that would be better?

  • Why does setting the margin on a div not affect the position of child content?

    - by DanM
    I'd like to understand a little more clearly how CSS margins work with divs and child content. If I try this...

        <div style="clear: both; margin-top: 2em;">
            <input type="submit" value="Save" />
        </div>

    ...the Save button is right up against the User Role (margin fail). If I change it to this...

        <div style="clear: both;">
            <input style="margin-top: 2em;" type="submit" value="Save" />
        </div>

    ...there is a gap between the Save button and the User Role (margin win). Questions: Can someone explain what I'm observing? Why doesn't putting a margin on the div cause the input to move down? Why must I put the margin on the input itself? There must be some fundamental law of CSS I am not grasping.

  • User-defined copy ctor, and copy ctors further down the chain - compiler bug? Or programmer's brain bug?

    - by J.Colmsee
    I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

        struct BulletFXData
        {
            int time_next_fx_counter;
            int next_fx_steps;
            Particle particles[2]; // this is the interesting one
            ParticleManager::ParticleId particle_id[2];
        };

    The member "Particle particles[2]" has a self-made kind of smart pointer in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the ptr to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

        struct BulletFX
        {
            BulletFXData data;
            BulletFXRenderFunPtr render_fun_ptr;
            BulletFXUpdateFunPtr update_fun_ptr;
            BulletFXExplosionFunPtr explode_fun_ptr;
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

            BulletFX(
                BulletFXData data,
                BulletFXRenderFunPtr render_fun_ptr,
                BulletFXUpdateFunPtr update_fun_ptr,
                BulletFXExplosionFunPtr explode_fun_ptr,
                BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
                : data(data),
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }

            /*
            // USER-DEFINED copy ctor. if it's defined things go crazy
            BulletFX(const BulletFX& rhs)
                : data(data), // this line of code seems to do a plain memory copy without calling the right ctors
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }
            */
        };

    If I use the user-defined copy ctor, my smart-pointer class goes crazy, and it seems that the copy ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code; some small example could have illustrated it too, but often you guys ask for the real code, so well - here it is :D

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
                                 schema.Column('id', types.Integer, primary_key=True),
                                 schema.Column('time', types.DateTime),
                                 schema.Column('ip', types.String(length=15)))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # turn them into ints, datetimes, etc.
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:
    - Is there a way to speed up the inserts within this basic framework? Maybe have a queue of inserts and some insertion threads, some sort of bulk inserts, etc.?
    - When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
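
    On the first question, one low-effort variant of bulk inserting is to batch rows and pass a list of parameter dicts to a single execute() call, which SQLAlchemy turns into an executemany. A sketch of the loop above rewritten that way (the batch size of 1000 is an arbitrary assumption):

        batch = []
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                batch.append(pythoninfy_log(m.groupdict()))
                if len(batch) >= 1000:
                    connection.execute(log_table.insert(), batch)  # one executemany per batch
                    batch = []
        if batch:
            connection.execute(log_table.insert(), batch)  # flush the remainder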

  • Open Maps app from code: where to find the "Current Location"?

    - by Simon
    I am opening the Maps app from my code to show directions from the user's current location to a destination coordinate. I am using the following code to open the Maps app, and I call it when a button is pressed.

        - (void)showDirectionsToHere {
            CLLocationCoordinate2D currentLocation = [self getCurrentLocation]; // LINE 1
            NSString *url = [NSString stringWithFormat:@"http://maps.google.com/maps?saddr=%f,%f&daddr=%f,%f",
                             destCoordinate.latitude + 0.1, destCoordinate.longitude,
                             destCoordinate.latitude, destCoordinate.longitude]; // 8.3, 12.1
            [[UIApplication sharedApplication] openURL:[NSURL URLWithString:url]];
        }

    Here [self getCurrentLocation] in LINE 1 uses CLLocationManager to determine the current location and returns the value. Note: I have not yet implemented LINE 1; I've just planned to do it that way. My questions are: Is it good practice to calculate the current location at the time the Maps app is called? Will [self getCurrentLocation] return the current location before openURL gets called? Do I have to determine the current location well before opening the Maps app? I am a little bit confused about these things. Kindly guide me. Thanks.

  • Convert VBA to VBS

    - by dnLL
    I have a little VBA script with some functions that I would like to convert to a single VBS file. Here is an example of what I've got:

        Private Declare Function GetPrivateProfileString Lib "kernel32" Alias "GetPrivateProfileStringA" _
            (ByVal lpApplicationName As String, ByVal lpKeyName As Any, ByVal lpDefault As String, _
             ByVal lpReturnedString As String, ByVal nSize As Long, ByVal lpFileName As String) As Long

        Private Function ReadIniFileString(ByVal Sect As String, ByVal Keyname As String) As String
            Dim Worked As Long
            Dim RetStr As String * 128
            Dim StrSize As Long
            Dim iNoOfCharInIni As Integer
            Dim sIniString, sProfileString As String

            iNoOfCharInIni = 0
            sIniString = ""
            If Sect = "" Or Keyname = "" Then
                MsgBox "Erreur lors de la lecture des paramètres dans " & IniFileName, vbExclamation, "INI"
                Access.Application.Quit
            Else
                sProfileString = ""
                RetStr = Space(128)
                StrSize = Len(RetStr)
                Worked = GetPrivateProfileString(Sect, Keyname, "", RetStr, StrSize, IniFileName)
                If Worked Then
                    iNoOfCharInIni = Worked
                    sIniString = Left$(RetStr, Worked)
                End If
            End If
            ReadIniFileString = sIniString
        End Function

    I then need to use this function to put some values into strings. VBS doesn't seem to like any of my variable declarations ((Dim) MyVar As MyType). If I'm able to adapt that code to VBS, I should be able to do the rest of my functions too. How can I adapt/convert this to VBS? Thank you.

  • Fortran pointer as an argument to interface procedure

    - by icarusthecow
    I'm trying to use interfaces to call different subroutines with different types; however, it doesn't seem to work when I use the pointer attribute. For example, take this sample code:

        MODULE ptr_types
            TYPE, abstract :: parent
                INTEGER :: q
            END TYPE

            TYPE, extends(parent) :: child
                INTEGER :: m
            END TYPE

            INTERFACE ptr_interface
                MODULE PROCEDURE do_something
            END INTERFACE

        CONTAINS

            SUBROUTINE do_something(atype)
                CLASS(parent), POINTER :: atype
                ! code determines that this allocation is correct from input
                ALLOCATE(child::atype)
                WRITE (*,*) atype%q
            END SUBROUTINE

        END MODULE

        PROGRAM testpass
            USE ptr_types
            CLASS(child), POINTER :: ctype
            CALL ptr_interface(ctype)
        END PROGRAM

    This gives the error:

        Error: There is no specific subroutine for the generic 'ptr_interface' at (1)

    However, if I remove the pointer attribute in the subroutine, it compiles fine. Now, normally this wouldn't be a problem, but for my use case I need to be able to treat that argument as a pointer, mainly so I can allocate it if necessary. Any suggestions? Mind you, I'm new to Fortran, so I may have missed something.

    Edit: I forgot to put the allocation in the parent's subroutine; the initial input is unallocated.

    EDIT 2: This is my second attempt, with caller-side casting:

        MODULE ptr_types
            TYPE, abstract :: parent
                INTEGER :: q
            END TYPE

            TYPE, extends(parent) :: child
                INTEGER :: m
            END TYPE

            TYPE, extends(parent) :: second
                INTEGER :: meow
            END TYPE

        CONTAINS

            SUBROUTINE do_something(this, type_num)
                CLASS(parent), POINTER :: this
                INTEGER type_num
                IF (type_num == 0) THEN
                    ALLOCATE (child::this)
                ELSE IF (type_num == 1) THEN
                    ALLOCATE (second::this)
                ENDIF
            END SUBROUTINE

        END MODULE

        PROGRAM testpass
            USE ptr_types
            CLASS(child), POINTER :: ctype

            SELECT TYPE(ctype)
            CLASS is (parent)
                CALL do_something(ctype, 0)
            END SELECT

            WRITE (*,*) ctype%q
        END PROGRAM

    However, this still fails. In the select statement it complains that parent must extend child. I'm sure this is due to restrictions when dealing with the pointer attribute, for type safety; however, I'm looking for a way to convert a pointer into its parent type for generic allocation, rather than having to write separate allocation functions for every type and hope they don't collide in an interface or something. Hopefully this example illustrates a little more clearly what I'm trying to achieve; if you know a better way, let me know.

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well. As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense--in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable. So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager--the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc.--wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

  • How to accommodate for the different screen resolution of iPhone 4?

    - by mystify
    This is a programming question! Read on before you vote to close! According to Apple, iPhone 4 has a new screen resolution:
    - 3.5-inch (diagonal) widescreen Multi-Touch display
    - 960-by-640-pixel resolution at 326 ppi
    This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most - if not all - developers do is design everything in such a way that a touchable area is, for example, 50 x 50 pixels big: just enough to tap it. Things have been positioned relative to the upper left to reach a specific position on screen - let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that allows telling if an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they did, they would suffer a lot from four times the size of any image, having to scale them down in memory. This is community wiki. Just add anything that you think is relevant to this huge problem (constant screen res was one of the main reasons why I didn't go for Android!!).

  • Why would an image (the Mandelbrot) be skewed and wrap around?

    - by Sean D
    So I just wrote a little snippet to generate the Mandelbrot fractal, and imagine my surprise when it came out all ugly and skewed (as you can see at the bottom). I'd appreciate a point in the direction of why this would even happen. It's a learning experience and I'm not looking for anyone to do it for me, but I'm kinda at a dead end debugging it. The offending generation code is:

        module Mandelbrot where
        import Complex
        import Image

        main = writeFile "mb.ppm" $ imageMB 1000

        mandelbrotPixel x y = mb (x:+y) (0:+0) 0

        mb c x iter
          | magnitude x > 2 = iter
          | iter >= 255     = 255
          | otherwise       = mb c (c+q^2) (iter+1)
          where q = x                                        --Mandelbrot
          --    q = (abs.realPart $ x) :+ (abs.imagPart $ x) --Burning Ship

        argandPlane x0 x1 y0 y1 width height =
            [(x,y) | y <- [y1,(y1-dy)..y0],  --traverse from
                     x <- [x0,(x0+dx)..x1]]  --top-left to bottom-right
            where dx = (x1 - x0)/width
                  dy = (y1 - y0)/height

        drawPicture :: (a->b->c)->(c->Colour)->[(a,b)]->Image
        drawPicture function colourFunction plane = map (colourFunction.uncurry function) plane

        imageMB s = createPPM s s $ drawPicture mandelbrotPixel (\x->[x,x,x])
                                  $ argandPlane (-1.8) (-1.7) (0.02) 0.055 s' s'
            where s' = fromIntegral s

    And the image code (which I'm fairly confident in) is:

        module Image where

        type Colour = [Int]
        type Image = [Colour]

        createPPM :: Int -> Int -> Image -> String
        createPPM w h i = concat ["P3 ", show w, " ", show h, " 255\n",
                                  unlines.map (unwords.map show) $ i]

  • How to read from multiple queues in the real world?

    - by Leon Cullens
    Here's a theoretical question: when I'm building an application using message queueing, I'm going to need multiple queues supporting different data types for different purposes. Let's assume I have 20 queues (e.g. one to create new users, one to process new orders, one to edit user settings, etc.). I'm going to deploy this to Windows Azure using the 'minimum' of 1 web role and 1 worker role. How does one read from all those 20 queues in a proper way? This is what I had in mind, but I have little or no real-world practical experience with this: create a class that spawns 20 threads in the worker role 'main' class; let each of these threads execute a method to poll a different queue, and let all those threads sleep between each poll (of course with a back-off mechanism that increases the sleep time). This leads to having 20 threads (or 21?) and 20 queues that are being actively polled, resulting in a lot of wasted messages (each time you poll an empty queue it's billed as a message). How do you solve this problem?
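
    To make the single-threaded alternative concrete, here is a sketch of one loop that round-robins all the queues with a shared back-off (Python-style pseudocode; get_message() and handle() are hypothetical placeholders, not the Azure SDK API):

        import time

        def poll_loop(queues, min_wait=1, max_wait=60):
            wait = min_wait
            while True:
                got_work = False
                for q in queues:
                    msg = q.get_message()  # hypothetical: returns None when the queue is empty
                    if msg is not None:
                        q.handle(msg)      # hypothetical: dispatch by queue/message type
                        got_work = True
                # back off only when every queue came up empty this round
                wait = min_wait if got_work else min(wait * 2, max_wait)
                time.sleep(wait)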
