Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

  • How to configure a crontab entry with a condition statement for checks

    - by chz
    We would like to monitor the NAS storage mounted on a Linux box, and only be notified by mail when usage exceeds a certain number, say 80. The Linux books we have seen only show crontab calling shell scripts at certain times. How do we write a crontab entry that only mails us if usage exceeds 80?

    The usual example:

        2 2 * * * /home/someUser/script.sh 2>&1 | mail [email protected]

    What we are looking for is something like this:

        2 2 * * * if [ someNumber -gt 80 ]; then /home/someUser/script.sh | mail [email protected]; fi

    Sincerely
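    One common pattern, sketched below, is to keep the crontab entry unconditional and move the threshold test into the script itself, so mail is only sent when the check fails. The mount point, threshold, and script path here are assumptions, not taken from the question.

        #!/bin/sh
        # check_nas.sh - mail only when NAS usage exceeds the threshold
        THRESHOLD=80
        # df -P prints one line per filesystem; column 5 is the Use% figure
        USAGE=$(df -P /mnt/nas | awk 'NR==2 {print $5}' | tr -d '%')
        if [ "$USAGE" -gt "$THRESHOLD" ]; then
            df -h /mnt/nas | mail -s "NAS usage at ${USAGE}%" [email protected]
        fi

    The crontab entry then stays simple:

        2 2 * * * /home/someUser/check_nas.sh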

    Read the article

  • MS Access ADP front end and SQL Server back end for field data collection?

    - by Brash Equilibrium
    I am an anthropologist. I am going to the field and will use a netbook to collect survey data. The survey forms will need to allow me to enter data into multiple tables, search tables, allow subforms, and be fast enough to not slow down my interview. I have considered storing the data in a SQL Server Express 2008 R2 server (there will be a lot of data) while using a Microsoft Access data project as a front end. To cut down the number of steps required to collect and store data, I'm considering using the netbook for both data storage and collection (after reading this article about SQL Server on a netbook). My questions are: (1) Is there a simpler solution that is also gratis (gratis because I already have a MS Access license from my workplace, and SQL Server Express is, obviously, free)? (2) Does my idea to store and collect data using the netbook make sense? Thank you.

    Read the article

  • Improve this generic abstract class

    - by Keivan
    I have the following abstract class design. I was wondering if anyone can suggest any improvements, in terms of stronger enforcement of our requirements or of simplifying implementations of ControllerBase.

        //Dependency Provider base
        public abstract class ControllerBase<TContract, TType>
            where TType : class, TContract
        {
            public static TContract Instance
            {
                get { return ComponentFactory.GetComponent<TContract, TType>(); }
            }

            public TC GetComponent<TC, TT>() where TT : class, TC
            {
                var component = (TT)Activator.CreateInstance(typeof(TT), true);
                RegisterComponentInstance<TC>(component);
                return component;
            }
        }

        //Contract
        public interface IController
        {
            void DoThing();
        }

        //Actual Class Logic
        public class Controller : ControllerBase<IController, Controller>, IController
        {
            //internal constructor
            internal Controller() { }

            public void DoThing() { }
        }

        //Usage
        public static void Main()
        {
            Controller.Instance.DoThing();
        }

    The following facts should always be true:

    - TType should always implement TContract (enforced using a generic constraint).
    - TContract must be an interface (I can't find a way to enforce this).
    - TType shouldn't have a public constructor, just an internal one; is there any way to enforce that using ControllerBase?
    - TType must be a concrete class (I didn't include new() as a generic constraint, since the constructors should be marked internal).
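    One option for the rules the type system can't express, sketched below, is a static constructor on ControllerBase that validates TContract and TType via reflection the first time a closed generic type is used. This is an illustration added here, not part of the original design.

        using System;
        using System.Linq;
        using System.Reflection;

        public abstract class ControllerBase<TContract, TType>
            where TType : class, TContract
        {
            static ControllerBase()
            {
                // TContract must be an interface
                if (!typeof(TContract).IsInterface)
                    throw new InvalidOperationException("TContract must be an interface.");

                // TType must be a concrete class
                if (typeof(TType).IsAbstract)
                    throw new InvalidOperationException("TType must be a concrete class.");

                // TType must not expose any public constructors
                if (typeof(TType).GetConstructors(BindingFlags.Public | BindingFlags.Instance).Any())
                    throw new InvalidOperationException("TType must only have non-public constructors.");
            }

            // ... Instance and GetComponent as in the question ...
        }

    The checks run once per closed generic type, so the cost is paid only on first use; the trade-off is that violations surface at runtime (as a TypeInitializationException wrapping the error) rather than at compile time.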

    Read the article

  • How to convert string with double high/wide characters to normal string [VC++6]

    - by Shaitan00
    My application typically receives a string in the following format: " Item $5.69 ". Some constants I always expect:

    - the LENGTH is always 20 characters
    - the start index of the text is always [5]
    - and most importantly, the index of the DECIMAL for the price is always [14]

    In order to identify this string correctly I validate all the expected constants listed above. Some of my clients have now started sending the string with Double-High / Double-Wide values (pairs of characters which represent a single readable character), similar to the following: " Item $\x80\x90.\x81\x91\x82\x92 ". For testing I simply scan the string character by character, compare char[i] and char[i+1], and replace these pairs with their corresponding single character when a match is found (works fine), as follows:

    [Code]
        for (int i = 0; i < sData.length() - 1; i++)
        {
            char ch  = sData[i] & 0xFF;
            char ch2 = sData[i + 1] & 0xFF;
            if (ch == '\x80' && ch2 == '\x90')
                sData.replace("\x80\x90", "0");
            else if (ch == '\x81' && ch2 == '\x91')
                sData.replace("\x81\x91", "1");
            else if (ch == '\x82' && ch2 == '\x92')
                sData.replace("\x82\x92", "2");
            ...
        }
    [/Code]

    But the result is something like this: " Item $5.69 ". Notice how this no longer matches my expectations: the length is now 17 (instead of 20) due to the 3 conversions, and the decimal is now at index 13 (instead of 14) due to the conversion of the "5" before the decimal point. Ideally I would like to convert the string to a normal readable format while keeping the constants (length, index of text, index of decimal) in the same places, so the rest of my application stays reusable; any other suggestion is welcome too (I'm pretty much stuck with this). Is there a STANDARD way of dealing with these types of characters? Any help would be greatly appreciated, I've been stuck on this for a while now. Thanks,

    Read the article

  • Facebook: tagging friends in a picture

    - by Rajesh Dante
    The code below tags only the first uid and then shows "Fatal error: Uncaught OAuthException: (#100) Invalid parameter". Also, can I use an exact location for tagging? In the code below the x and y values are in pixels.

        $facebook = new Facebook(array(
            'appId'  => FBAPPID,
            'secret' => FBSECRETID,
        ));
        $facebook->setFileUploadSupport(true);

        if (isset($_POST['image']) && isset($_POST['tname'])) {
            $path_to_image = encrypt::instance()->decode($_POST['image']);
            $tags = (array) encrypt::instance()->decode($_POST['tname']);
            /*
             * Output
             * $tags = array(
             *     0 => '[{"tag_uid":"100001083191675","x":100,"y":100},{"tag_uid":"100001713817872","x":100,"y":230},{"tag_uid":"100000949945144","x":100,"y":360},{"tag_uid":"100001427144227","x":230,"y":100},{"tag_uid":"100000643504257","x":230,"y":230},{"tag_uid":"100001155130231","x":230,"y":360}]'
             * );
             */
            $args = array(
                'message'      => 'Von ',
                'source'       => '@' . $path_to_image,
                'access_token' => $this->user->fbtoken,
            );
            $photo = $facebook->api($this->user->data->fbid . '/photos', 'post', $args);

            // upload works but not tags
            if (is_array($photo) && !empty($photo['id'])) {
                echo 'Photo uploaded. Check it on Graph API Explorer. ID: ' . $photo['id'];
                foreach ($tags as $key => $t) {
                    // note: $tags holds a single JSON string, so $t here is the whole list
                    $tagRe = json_encode($t);
                    $args = array(
                        'tags'         => $tagRe,
                        'access_token' => $this->user->fbtoken,
                    );
                    $facebook->api('/' . $photo['id'] . '/tags', 'post', $args);
                }
            }
        }

    Read the article

  • Multiple exports with MEF does some really heinous stuff -- why, and why is it allowed?

    - by Dave
    I have an interesting situation where I need to do something like this:

        [Export(typeof(ICandy1))]
        [Export(typeof(ICandy2))]
        public class Candy : ICandy2 { ... }

    where

        public interface ICandy1 { ... }
        public interface ICandy2 : ICandy1 { ... }

    I couldn't find any posts anywhere regarding using multiple [Export] attributes, so I figured, what the hell, might as well try it. At first glance, it actually seemed to work. I have a couple of methods that call into both interfaces of a Candy instance, and it was fine.

    However, as I started to test the app, I saw that the behavior wasn't right, and when looking at the Output window, I saw that I was getting tons of COMExceptions. I couldn't track down where they were all coming from, but they always occurred when a worker thread was sleeping. I figured they had to be coming from the main thread, then, but didn't know how to debug this at all. Nothing should have been going on in the GUI, and I disabled my DispatchTimers just in case -- same thing.

    Even more strange than the COMExceptions was the really, really erratic behavior when stepping through code. About 30% of the time, when I single stepped, it would pop out of the method, or it would single step over two lines of code! Totally weird stuff that I am not used to seeing.

    The only thing that changed between working and non-working code was the introduction of MEF through my plugin loading code. So as a test, I changed my plugin assembly to only export one interface, and I hardcoded everything in the app that relied on the other (now not-implemented) interface. And now the COMExceptions are gone, and the weird debugging behavior is gone.

    Is this something people here have seen before? If MEF is not expected to allow a class to export multiple interfaces, then shouldn't a CompositionException get raised when composing the parts? Can anyone explain why MEF would cause these weird problems?

    Read the article

  • df -h command in Ubuntu

    - by Esha Sharma
    I am a new user of Ubuntu. When I type df -h in a terminal, it gives me a list of all storage devices and their space usage. On my system I get this:

        Filesystem      Size  Used Avail Use% Mounted on
        /cow            934M  173M  761M  19% /
        udev            925M  4.0K  925M   1% /dev
        tmpfs           374M  856K  373M   1% /run
        /dev/sdb1       7.5G  2.8G  4.8G  37% /cdrom
        /dev/loop0      1.5G  1.5G     0 100% /rofs
        tmpfs           934M   16K  934M   1% /tmp
        none            5.0M     0  5.0M   0% /run/lock
        none            934M   76K  934M   1% /run/shm
        /dev/sda        299G   74M  299G   1% /media/q

    I understand that /dev/sda is my hard drive, which is 320 GB (about 299 GiB, which is hopefully what is being displayed), and that /dev/sdb1 is the 8 GB pendrive from which I am running the live CD. My question is: what are the other filesystems, and where do they physically live if the whole disk is taken by /dev/sda?

    Read the article

  • Setting up magic routes for plugins in CakePHP 1.3?

    - by Matt Huggins
    I'm working on upgrading my project from CakePHP 1.2 to 1.3. In the process, it seems that the "magic" routing for plugins, by which a controller whose name matches the plugin name (e.g. "ForumsController" in the "forums" plugin) is automatically routed to the root of the plugin URL (e.g. "www.example.com/forums" pointing to plugin "forums", controller "forums", action "index"), no longer works. The error message given is as follows:

        Error: ForumsController could not be found.
        Error: Create the class ForumsController below in file: app/controllers/forums_controller.php

        <?php
        class ForumsController extends AppController {
            var $name = 'Forums';
        }
        ?>

    In fact, even if I navigate to "www.example.com/forums/forums" or "www.example.com/forums/forums/index", I get the exact same error. Do I need to explicitly set up routes to every single plugin I use? This seems to destroy a lot of the magic I like about CakePHP. I've found that only the following works:

        Router::connect('/forums/:action/*', array('plugin' => 'forums', 'controller' => 'forums'));
        Router::connect('/forums', array('plugin' => 'forums', 'controller' => 'forums', 'action' => 'index'));

    Setting up 2 routes for every single plugin seems like overkill, does it not? Is there a better solution that will cover all my plugins, or at least reduce the number of routes I need to set up for each plugin?
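    One way to avoid writing the pair of routes by hand, sketched below, is to generate them in app/config/routes.php for every installed plugin. This assumes App::objects('plugin') returns your plugin names in 1.3 and that each plugin has a controller named after itself; treat it as a starting point rather than a drop-in fix.

        // app/config/routes.php -- a hedged sketch for CakePHP 1.3
        foreach (App::objects('plugin') as $plugin) {
            $name = Inflector::underscore($plugin);   // e.g. 'Forums' -> 'forums'
            Router::connect("/{$name}", array(
                'plugin' => $name, 'controller' => $name, 'action' => 'index'));
            Router::connect("/{$name}/:action/*", array(
                'plugin' => $name, 'controller' => $name));
        }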

    Read the article

  • media.set_xx giving me grief!

    - by Firas
    New guy here. I asked a while back about a sprite recolouring program that I was having difficulty with and got some great responses. Basically, I tried to write a program that recolours the pixels of all the pictures in a given folder from one given colour to another. I believe I have it down, but now the program is telling me that I have an invalid value specified for the red component of my colour (ValueError: Invalid red value specified.), even though it's only being changed from 64 to 56. Any help on the matter would be appreciated! Here's the code, in case I messed up somewhere else; it's in Python:

        import os
        import sys

        import media


        def recolour(old, new, folder):
            old_list = old.split(' ')
            new_list = new.split(' ')
            folder_location = os.path.join('C:\\', 'Users', 'Owner', 'Spriting', folder)
            for filename in os.listdir(folder):
                current_file = media.load_picture(folder_location + '\\' + filename)
                for pix in current_file:
                    if (media.get_red(pix) == int(old_list[0])) and \
                       (media.get_green(pix) == int(old_list[1])) and \
                       (media.get_blue(pix) == int(old_list[2])):
                        # note: new_list values are still strings at this point
                        media.set_red(pix, new_list[0])
                        media.set_green(pix, new_list[1])
                        media.set_blue(pix, new_list[2])
                media.save(current_file)


        if __name__ == '__main__':
            while 1:
                old = str(raw_input('Please insert the original RGB component, separated by a single space: '))
                if old == 'quit':
                    sys.exit(0)
                new = str(raw_input('Please insert the new RGB component, separated by a single space: '))
                if new == 'quit':
                    sys.exit(0)
                folder = str(raw_input('Please insert the name of the folder you wish to modify: '))
                if folder == 'quit':
                    sys.exit(0)
                else:
                    recolour(old, new, folder)

    Read the article

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
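    For a roll-your-own option, a minimal sketch using the third-party Python watchdog package is below: it watches the collection directory and copies each newly created file to the CIFS share as soon as it appears. The paths are placeholders, and a production version would also need to handle files still being written and to run as a Windows service, so treat this as an illustration of the approach rather than a finished tool.

        import shutil
        import time

        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        WATCH_DIR = r"C:\data"              # local NTFS folder the collector writes to
        DEST_DIR = r"\\netapp\share\data"   # the CIFS share

        class CopyNewFiles(FileSystemEventHandler):
            def on_created(self, event):
                # fires when a new file shows up; a robust version would wait
                # until the writer has closed the file before copying
                if not event.is_directory:
                    shutil.copy2(event.src_path, DEST_DIR)

        if __name__ == "__main__":
            observer = Observer()
            observer.schedule(CopyNewFiles(), WATCH_DIR, recursive=True)
            observer.start()
            try:
                while True:
                    time.sleep(1)
            except KeyboardInterrupt:
                observer.stop()
            observer.join()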

    Read the article

  • Tree-like queues

    - by Rehno Lindeque
    I'm implementing an interpreter-like project for which I need a strange little scheduling queue. Since I'd like to try and avoid wheel reinvention, I was hoping someone could give me references to a similar structure or existing work. I know I can simply instantiate multiple queues as I go along; I'm just looking for some perspective from other people who might have better ideas than me ;)

    I envision that it might work something like this: the structure is a tree with a single root. You get a kind of "insert_iterator" to the root and then push elements onto it (e.g. a and b in the example below). However, at any point you can also split the iterator into multiple iterators, effectively creating branches. The branches cannot merge into a single queue again, but you can start popping elements from the front of the queue (again, using a kind of "visitor_iterator") until empty branches can be discarded (at your discretion).

                   +-> x -> y -> z
        a -> b ----+-> g -> h -> i -> j
                   +-> f -> b

    Any ideas? It seems like a relatively simple structure to implement myself using a pool of circular buffers, but I'm following the "think first, code later" strategy :) Thanks
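    For what it's worth, a minimal sketch of how such a structure could look is below; the class and method names are invented for illustration (push into the current branch, split into children, pop from the front, discard empty branches at your discretion).

        from collections import deque

        class BranchQueue:
            """A queue node that can split into child branches (a tree of deques)."""

            def __init__(self):
                self.items = deque()
                self.children = []

            def push(self, item):
                # acts like the "insert_iterator": append to this branch's tail
                self.items.append(item)

            def split(self, n):
                # stop inserting here and hand back n new branch tails
                self.children = [BranchQueue() for _ in range(n)]
                return self.children

            def pop(self):
                # acts like the "visitor_iterator": take from this branch's front
                return self.items.popleft()

            def empty(self):
                return not self.items and all(c.empty() for c in self.children)

        # usage matching the example above
        root = BranchQueue()
        root.push('a'); root.push('b')
        b1, b2, b3 = root.split(3)
        for item in ('x', 'y', 'z'):
            b1.push(item)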

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        def get_regions():
            cache_key = 'regions'
            regions = cache.get(cache_key)
            if regions is None:
                # not found in cache
                regions = Regions.objects.all()
                cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
            return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few related tables. In my view I get records from one table, and in the template I show them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries made from templates? Or do I have to build some big structure in my view (with all the related tables), cache it, and send that to the template?
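    On the template question, two standard Django options are sketched below: caching the whole rendered view and caching just an expensive template fragment. The timeouts and the fragment name are arbitrary; in addition, using select_related() on the queryset in the view usually eliminates the per-row queries that make pages like this slow.

        # views.py -- cache the entire rendered page for 10 minutes
        from django.views.decorators.cache import cache_page

        @cache_page(60 * 10)
        def region_list(request):
            ...

    And in the template, fragment caching:

        {% load cache %}
        {% cache 600 regions_block %}
            ... loop over the records and their related info here ...
        {% endcache %}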

    Read the article

  • SQLAlchemy, one to many vs many to one

    - by sadvaw
    Dear Everyone,

    I have the following data:

        CREATE TABLE `groups` (
          `bookID` INT NOT NULL,
          `groupID` INT NOT NULL,
          PRIMARY KEY (`bookID`),
          KEY (`groupID`)
        );

    and a books table which basically has books(bookID, name, ...), but WITHOUT a groupID. There is no way for me to determine what the groupID is at the time of the insert for books.

    I want to do this in SQLAlchemy, so I tried mapping Book to books joined with groups on book.bookID = groups.bookID. I made the following:

        tb_groups = Table('groups', metadata,
            Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True),
            Column('groupID', Integer),
        )
        tb_books = Table('books', metadata,
            Column('bookID', Integer, primary_key=True),
        )
        tb_joinedBookGroup = sql.join(tb_books, tb_groups,
                                      tb_books.c.bookID == tb_groups.c.bookID)

    and defined the following mappers:

        mapper(Group, tb_groups, properties={
            'books': relation(Book, backref='group')
        })
        mapper(Book, tb_joinedBookGroup)

    However, when I execute this piece of code, I realized that each book object has a groups field, which is a list, and each group object has a books field, which is a scalar assignment. I think my definition here must be causing SQLAlchemy to be confused about the many-to-one vs. one-to-many relationship. Can someone help me sort this out?

    My desired goal is g.books == [b, b, b, ...] and b.group == g, where g is an instance of Group and b is an instance of Book.

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception:

        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?

    Read the article

  • drupal themes: how do I include several css files / js files in my theme's .info file?

    - by egarcia
    I'm creating a new Drupal theme. Until now, I only needed to include a single CSS file and a single JS file, so my theme's .info file had something like this:

        stylesheets[all][] = css/style.css
        scripts[] = js/script.js

    Now I must include jQuery and jQuery UI in order to use a calendar date picker. These come with 2 new JavaScript files and 1 additional CSS file that I must add to the site. The calendar input form is going to be used on all pages (in a side block), so it is OK for me to load the extra CSS/JavaScript on all pages. I think the easiest thing would be to reference them in the .info file itself. At first I tried to just put them there separated by spaces:

        stylesheets[all][] = css/style.css css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/script.js js/jquery-1.4.2.min.js js/jquery-ui-1.8.1.custom.min.js

    I emptied Drupal's cache and... none of them loaded. I then tried separating each file with a comma and flushing the cache again. Same result. I've browsed some Drupal pages, but could not find how to add several JavaScript/CSS files to one theme (they always seem to add just one of each). So, how do I include several CSS/JavaScript files in the .info file?
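    If memory serves, the .info format wants one file per line, repeating the key for each file; a sketch using the file names from the question:

        stylesheets[all][] = css/style.css
        stylesheets[all][] = css/ui-lightness/jquery-ui-1.8.1.custom.css
        scripts[] = js/script.js
        scripts[] = js/jquery-1.4.2.min.js
        scripts[] = js/jquery-ui-1.8.1.custom.min.js

    (Remember to clear the theme registry/cache after editing the .info file so the new entries are picked up.)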

    Read the article

  • trouble calculating offset index into 3D array

    - by Derek
    Hello, I am writing a CUDA kernel to create a 3x3 covariance matrix for each location in the rows*cols main matrix. That 3D matrix is rows*cols*9 in size, and I allocated it in a single malloc, so I need to access it with a single index value. The 9 values of each 3x3 covariance matrix are set according to the appropriate row r and column c of some other 2D arrays.

    In other words, I need to calculate the appropriate index to access the 9 elements of the 3x3 covariance matrix, the row and column offset of the 2D matrices that are inputs to the value, and the appropriate index for the storage array. I have tried to simplify it down to the following:

        // I am calling this kernel with 1D blocks that are 512 cols x 1 row. TILE_WIDTH = 512
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int r = by + ty;
        int c = bx * TILE_WIDTH + tx;
        int offset = r * cols + c;
        int ndx = r * cols * rows + c * cols;

        // this IF statement is trying to avoid the case where a thread block
        // runs past the end of my original array... not sure if correct
        if ((r < rows) && (c < cols)) {
            d_cov[ndx + 0] = otherArray[offset];
            d_cov[ndx + 1] = otherArray[offset];
            d_cov[ndx + 2] = otherArray[offset];
            d_cov[ndx + 3] = otherArray[offset];
            d_cov[ndx + 4] = otherArray[offset];
            d_cov[ndx + 5] = otherArray[offset];
            d_cov[ndx + 6] = otherArray[offset];
            d_cov[ndx + 7] = otherArray[offset];
            d_cov[ndx + 8] = otherArray[offset];
        }

    When I check this array against the values calculated on the CPU, which loops over i = rows, j = cols, k = 1..9, the results do not match up. In other words:

        d_cov[i*rows*cols + j*cols + k] != correctAnswer[i][j][k]

    Can anyone give me any tips on how to solve this problem? Is it an indexing problem, or some other logic error?
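    For comparison, a sketch of an indexing scheme that keeps the 9 covariance entries for cell (r, c) contiguous is below. Whether it matches the CPU side depends on how correctAnswer is laid out, but for an array allocated as rows x cols x 9 in one block this is the usual row-major formula (note that neither the kernel's ndx above nor the i*rows*cols + j*cols + k check uses the factor of 9):

        // element k (0..8) of the 3x3 block belonging to cell (r, c)
        __host__ __device__ inline int covIndex(int r, int c, int k, int cols)
        {
            return (r * cols + c) * 9 + k;
        }

        // in the kernel:               d_cov[covIndex(r, c, 0, cols)] = ...;
        // on the CPU, the same element: correctAnswer_flat[covIndex(i, j, k, cols)]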

    Read the article

  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeXLive distribution, and I've been upgrading as each new TeXLive release comes out. In recent years I have noticed an increase in new packages, updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document which relies heavily on packages and which was produced a few years back may start to produce warnings and error messages when compiled with a present-day LaTeX.

    The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, when there are so many continual changes to the packages in a LaTeX distribution, I tend to wonder where this will end. Not that enhanced and new features in packages are bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file that he/she is working on at present and not get any compilation warnings or error messages due to some unpredictable behavior of updated packages, or due to a package that has been dropped from a LaTeX distribution. If I understand correctly, CTAN does keep a database with all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...

    Read the article

  • Actual High Speed USB flash drive

    - by CSkau
    I'm looking to upgrade my EEE 1000H by possibly replacing the HDD with simple (internal) USB-connected storage. The problem I'm having is that I can't seem to find any actual high-speed USB sticks. They all proclaim high speeds, but usually turn out to be ~30 MB/s, much lower than the 60 MB/s (480 Mbit/s / 8) I understand USB 2.0 is capable of, no? Can anyone enlighten me as to why no USB sticks seem to go past that low bar, or alternatively point me in the direction of some actual high-speed USB sticks? Any help is greatly appreciated :) Cheers!

    Read the article

  • using yield in C# like I would in Ruby

    - by Sarah Vessels
    Besides just using yield for iterators in Ruby, I also use it to pass control briefly back to the caller before resuming control in the called method. What I want to do in C# is similar. In a test class, I want to get a connection instance, create another variable instance that uses that connection, then pass the variable to the calling method so it can be fiddled with. I then want control to return to the called method so that the connection can be disposed. I guess I'm wanting a block/closure like in Ruby. Here's the general idea:

        private static MyThing getThing()
        {
            using (var connection = new Connection())
            {
                yield return new MyThing(connection);
            }
        }

        [TestMethod]
        public void MyTest1()
        {
            // call getThing(), use the yielded MyThing, control returns to getThing()
            // for disposal
        }

        [TestMethod]
        public void MyTest2()
        {
            // call getThing(), use the yielded MyThing, control returns to getThing()
            // for disposal
        }

        ...

    This doesn't work in C#; ReSharper tells me that the body of getThing cannot be an iterator block because MyThing is not an iterator interface type. That's definitely true, but I don't want to iterate through some list. I'm guessing I shouldn't use yield if I'm not working with iterators. Any idea how I can achieve this block/closure thing in C# so I don't have to wrap the code in MyTest1, MyTest2, ... with the code in getThing()'s body?
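    The closest common C# substitute for a Ruby block, sketched below, is to invert the call: pass a delegate into the helper, which keeps control and can dispose the connection after the delegate returns. Connection and MyThing are the question's own types; the helper name is invented.

        private static void WithThing(Action<MyThing> body)
        {
            using (var connection = new Connection())
            {
                // control goes to the caller's lambda here, then comes back
                body(new MyThing(connection));
            }
            // connection is disposed once the lambda has finished
        }

        [TestMethod]
        public void MyTest1()
        {
            WithThing(thing =>
            {
                // use thing; disposal happens automatically afterwards
            });
        }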

    Read the article

  • Can a S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a bucket mounted at /mnt/s3. I can read and write files to it, no problem. In my httpd.conf I have DocumentRoot "/mnt/s3". When I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have a shared storage space so my nodes can scale easily and access the same document root. Thanks

    Read the article

  • OOP C# Question: Making a Fruit a Pear

    - by Adam Kane
    Given that I have an instance of Fruit with some properties set, and I want to get those properties into a new Pear instance (because this particular Fruit happens to have the qualities of a pear), what's the best way to achieve this effect? For example, what we can't do is simply cast a Fruit to a Pear, because not all Fruits are Pears:

        public static class PearGenerator
        {
            public static Pear CreatePear()
            {
                // Make a new generic fruit.
                Fruit genericFruit = new Fruit();

                // Downcast it to a pear. (Throws an exception: can't cast a Fruit to a Pear.)
                Pear pear = (Pear)genericFruit;

                // Return freshly grown pear.
                return ( pear );
            }
        }

        public class Fruit
        {
            // some code
        }

        public class Pear : Fruit
        {
            public void PutInPie()
            {
                // some code
            }
        }

    Thanks!

    Update: I don't control the "new Fruit()" code. My starting point is that I've got a Fruit to work with. I need to get that Fruit into a new Pear somehow. Maybe copy all the properties one by one?
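    A minimal sketch of the copy-the-properties idea from the update is below: give Pear a constructor that takes the existing Fruit and copies the shared state across. The Colour and Weight properties are invented here for illustration; they are not part of the original code.

        public class Pear : Fruit
        {
            public Pear(Fruit source)
            {
                // copy each property the two types share
                this.Colour = source.Colour;
                this.Weight = source.Weight;
            }

            public void PutInPie()
            {
                // some code
            }
        }

        // usage: wrap the existing Fruit instead of casting it
        Pear pear = new Pear(genericFruit);

    (A mapping helper such as AutoMapper, or a reflection loop over the public properties, does the same job when there are many properties to copy.)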

    Read the article

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS volume across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need well. We use the filesystem for storing DB record attachments, so when such an attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems to be viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis on that data. This must be quite a common problem when scaling in AWS; what solutions are there?

    Read the article

  • tk: how to invoke it just to display something, and return to the main program?

    - by max
    Sorry for the noob question, but I really don't understand this. I'm using Python / Tkinter and I want to display something (say, a canvas with a few shapes on it), and keep it displayed until the program quits. I understand that no widgets will be displayed until I call tkinter.Tk.mainloop(). However, if I call mainloop(), I won't be able to do anything else until the user closes the main window. I don't need to monitor any user input events, just display some stuff. What's a good way to do this without giving up control to mainloop?

    EDIT: Is this sample code reasonable?

        class App(tk.Tk):
            def __init__(self, sim):
                tk.Tk.__init__(self)
                self.sim = sim          # link to the simulation instance
                self.loop()

            def loop(self):
                self.redraw()           # update the GUI to reflect the new simulation state
                self.sim.next_step()    # advance the simulation another step
                self.after(0, self.loop)

            def redraw(self):
                # get whatever we need from self.sim and put it on the screen
                pass

    EDIT 2 (added after_idle):

        class App(tk.Tk):
            def __init__(self, sim):
                tk.Tk.__init__(self)
                self.sim = sim          # link to the simulation instance
                self.after_idle(self.preloop)

            def preloop(self):
                self.after(0, self.loop)

            def loop(self):
                self.redraw()           # update the GUI to reflect the new simulation state
                self.sim.next_step()    # advance the simulation another step
                self.after_idle(self.preloop)

            def redraw(self):
                # get whatever we need from self.sim and put it on the screen
                pass

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration, plus 1-2 extra fields; e.g. French customer details may also store an entry code, floor/level, and title fields (madame, etc.), while South Africa would have a security number, and so on.

    Given that we're talking about minor variances, my idea is to put all of the fields into the table and use whatever is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields / columns are bad, but I'm struggling to find justification in terms of database design principles for or against this argument, and preferred solutions.

    Another option is a separate mini EAV table that stores extra data with parent_id, key, and val fields, or to serialise the extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data.

    So my question specifically: if you have slight variances in data for each record (1-2 different fields, for instance), what is the best way to proceed?
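    For concreteness, a minimal sketch of the mini EAV variant mentioned above (table and column names are placeholders, not from an existing schema):

        CREATE TABLE customer_extra (
          parent_id INT NOT NULL,          -- FK to the main customer_details row
          field_key VARCHAR(64) NOT NULL,  -- e.g. 'entry_code', 'security_number'
          field_val VARCHAR(255),
          PRIMARY KEY (parent_id, field_key)
        );

    The trade-off is the usual EAV one: the handful of country-specific fields stop cluttering the main table, but every value becomes an untyped string and queries against those fields need an extra join.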

    Read the article

  • In an Android application, should I have one content provider per table or only one for the entire application?

    - by Andrew Dyer
    I have years of experience with Microsoft .NET development (primarily C#) and have been working to come up to speed on Android and Java. So far, I've built a small application with a couple screens and a working content provider. All of the examples I've seen for developing content providers typically work with a single table, so I got the impression that this was the convention. I built a couple more content providers for other tables and ran into the "Unknown URI" IllegalArgumentException when I tried to test them. The exception is being thrown by one of my content providers, but not the one I was intending to call. It appears that my application is using the first content provider in the AndroidManifest.xml file, which now has me wondering if I should only have a single content provider for the entire application. Are there any best practices and/or examples for working with multiple tables in an Android application? Should I have one content provider per table or only one for the entire application? If the former, how do I resolve URIs to the proper provider? If the latter, how do I keep my content provider code from being polluted with switch statements?
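    A common answer is a single provider for the whole app, with a UriMatcher deciding which table each URI refers to; a hedged sketch (the authority, table names, and db field are invented for illustration) is below.

        import android.content.ContentProvider;
        import android.content.ContentValues;
        import android.content.UriMatcher;
        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;
        import android.net.Uri;

        public class AppProvider extends ContentProvider {
            private static final String AUTHORITY = "com.example.app.provider";
            private static final int BOOKS = 1;
            private static final int AUTHORS = 2;

            private static final UriMatcher MATCHER = new UriMatcher(UriMatcher.NO_MATCH);
            static {
                MATCHER.addURI(AUTHORITY, "books", BOOKS);
                MATCHER.addURI(AUTHORITY, "authors", AUTHORS);
            }

            private SQLiteDatabase db;   // assumed to be opened in onCreate()

            @Override
            public boolean onCreate() {
                // open the database here (e.g. via a SQLiteOpenHelper)
                return true;
            }

            @Override
            public Cursor query(Uri uri, String[] projection, String selection,
                                String[] selectionArgs, String sortOrder) {
                switch (MATCHER.match(uri)) {
                    case BOOKS:
                        return db.query("books", projection, selection, selectionArgs, null, null, sortOrder);
                    case AUTHORS:
                        return db.query("authors", projection, selection, selectionArgs, null, null, sortOrder);
                    default:
                        throw new IllegalArgumentException("Unknown URI " + uri);
                }
            }

            // insert(), update(), delete() and getType() follow the same switch-on-match pattern
            @Override public Uri insert(Uri uri, ContentValues values) { return null; }
            @Override public int update(Uri uri, ContentValues values, String sel, String[] args) { return 0; }
            @Override public int delete(Uri uri, String sel, String[] args) { return 0; }
            @Override public String getType(Uri uri) { return null; }
        }

    An "Unknown URI" IllegalArgumentException like the one described usually means the requested URI never made it into the matcher, or the request was routed to whichever provider's android:authorities entry matched first in the manifest, so checking the authorities declared for each provider against the URIs being used is a good first step.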

    Read the article
