Search Results

Search found 8687 results on 348 pages for 'per akerberg'.

  • Is there any reason for an object pool to not be treated as a singleton?

    - by Chris Charabaruk
    I don't necessarily mean implemented using the singleton pattern, but rather, only having and using one instance of a pool. I don't like the idea of having just one pool (or one per pooled type). However, I can't really come up with any concrete situations where there's an advantage to multiple pools for mutable types, at least not any where a single pool can function just as well. What advantages are there to having multiple pools over a singleton pool?
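    For concreteness, a minimal Python sketch of what "only one instance of a pool" might look like: a module-level pool that every caller shares, instead of each subsystem constructing its own. The class and the pooled type are made up for illustration.

        import queue

        class ObjectPool:
            # hand out previously released objects, create new ones on demand
            def __init__(self, factory, size=8):
                self._factory = factory
                self._items = queue.Queue()
                for _ in range(size):
                    self._items.put(factory())

            def acquire(self):
                try:
                    return self._items.get_nowait()
                except queue.Empty:
                    return self._factory()      # grow instead of blocking

            def release(self, obj):
                self._items.put(obj)

        # the "singleton" flavour: one shared, module-level pool for the pooled type
        buffer_pool = ObjectPool(factory=lambda: bytearray(4096))

        buf = buffer_pool.acquire()
        # ... use buf ...
        buffer_pool.release(buf)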

    Read the article

  • Shake to open view modally

    - by Vivas
    Hi, I have my 'shake' working fine (using motionEnded), based on Apple's GLPaint code. When the user shakes the device (running 3.0 and up) I want to open a view controller modally using presentModalViewController. In my app delegate I register the notification (as per the GLPaint sample code):

        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(shakeToOpenHiddenScreen) name:@"shake" object:nil];

    In shakeToOpenHiddenScreen I just want to open view 'x' modally, but I don't think my app delegate will respond to presentModalViewController. Is there a way around this?

    Read the article

  • ASP.NET Membership and Roles separation relationship

    - by Saif Khan
    Hi, I have an ASP.NET project where I want to keep the membership (SQL provider) in a separate database, while the roles/profiles will be per application. Question: what is the key that relates the Membership database to the Roles/Profile database? Is it the UserID or the UserName? I opened up the tables in a separate explorer and noticed that the UserID in the Membership database is different from the one in the application Roles database.

    Read the article

  • Non-global middleware in Django

    - by hekevintran
    In Django there is a settings file that defines the middleware to be run on each request. This middleware setting is global. Is there a way to specify a set of middleware on a per-view basis? I want to have specific urls use a set of middleware different from the global set.
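    As a concrete illustration, Django ships django.utils.decorators.decorator_from_middleware, which wraps a middleware class so that it runs for a single view instead of globally; the middleware class, header name, and view below are made up for this sketch.

        from django.http import HttpResponse
        from django.utils.decorators import decorator_from_middleware

        class AuditMiddleware:
            # hypothetical middleware: stamp a header on the response
            def __init__(self, get_response=None):
                self.get_response = get_response

            def process_response(self, request, response):
                response['X-Audited'] = '1'
                return response

        audited = decorator_from_middleware(AuditMiddleware)

        @audited
        def my_view(request):
            # only this view runs AuditMiddleware; the global middleware setting is untouched
            return HttpResponse('ok')

    The same decorator can also be applied to the view where it is referenced in the URLconf, which gives per-URL rather than per-view behaviour.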

    Read the article

  • Scanner Daily Duty Cycle

    - by juanp
    I'm confused by the concept of 'Daily Duty Cycle'. For example, if I have a scanner whose spec is PPM (pages per minute): 90 and DDC (Daily Duty Cycle): 800, does that mean it will only be able to scan 800 pages in one day?

    Read the article

  • lua table C api

    - by anon
    I know of http://lua-users.org/wiki/SimpleLuaApiExample. It shows me how to build up a table one (key, value) pair at a time. Suppose instead I want to build a gigantic table (say a 1000-entry table where both keys and values are strings); is there a fast way to do this in Lua, rather than four function calls per entry (push key, push value, rawset)? Thanks!

    Read the article

  • How to configure the 5554:WVGA800H model in android

    - by siva
    Hi, can anyone help me out with configuring the 5554:WVGA800H model in Android? As per this link http://developer.android.com/guide/developing/tools/emulator.html#emulatornetworking they have given the screen for the tablet. Can anyone guide me on this? Thanks & Regards, P.Sivasankar

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning one or more columns (for complex types) with the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed across several databases. I'm not quite happy with this solution, though; the only positive thing is the simplicity. Its drawbacks:

    - I can only store a fixed number of columns.
    - I need to create indexes on all columns.
    - I need to recreate the table on schema changes.

    Some of my key design criteria are:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousand columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice:

    - Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
    - Solution 2: A key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course I hate having to convert everything to strings.
    - Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is very good XPath support). I would also like to avoid an XML database, as other parts of the application fit the relational model better, and being able to join the data is helpful.

    I can't help thinking that someone has solved (some of) this already, but I'm unable to find anything; not quite sure what to search for either. I know market research does something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but a lot of the stuff I do need is not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
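    A minimal sketch of Solution 1 in Python with SQLite, just to make the shape concrete; the table and column names are made up for illustration, and a real implementation would live in .Net/Mono as described above.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE entry_value (
                entry_id   INTEGER NOT NULL,
                field_name TEXT    NOT NULL,   -- comes from the user-defined schema
                type       TEXT    NOT NULL,   -- 'int', 'real' or 'text'
                int_val    INTEGER,            -- exactly one of the *_val columns is set
                real_val   REAL,
                text_val   TEXT,
                PRIMARY KEY (entry_id, field_name)
            );
            CREATE INDEX idx_entry_value_field ON entry_value (field_name);
        """)

        def store(entry_id, field_name, value):
            # route the value to the column that matches its Python type
            col, typ = {int: ("int_val", "int"), float: ("real_val", "real")}.get(type(value), ("text_val", "text"))
            conn.execute(
                "INSERT INTO entry_value (entry_id, field_name, type, %s) VALUES (?, ?, ?, ?)" % col,
                (entry_id, field_name, typ, value),
            )

        store(1, "age", 42)
        store(1, "name", "Alice")
        print(conn.execute("SELECT field_name, type FROM entry_value WHERE entry_id = 1").fetchall())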

    Read the article

  • Data Quality Check - SQL Server

    - by user319384
    I am trying to find a good mechanism to check whether the data entered by a group of people is grammatically correct, has correct spelling, and so on. I would also like to compute words per minute and accuracy. Is there an existing process for this so that I do not have to reinvent the wheel? Thanks in advance.
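    For the words-per-minute and accuracy part, the arithmetic is straightforward; a hedged Python sketch (the function names and the idea of comparing against a reference text are assumptions, not something stated above):

        def words_per_minute(typed_text, elapsed_seconds):
            # WPM = word count divided by elapsed time in minutes
            return len(typed_text.split()) / (elapsed_seconds / 60.0)

        def accuracy(typed_words, reference_words):
            # fraction of positions where the typed word matches the reference
            correct = sum(t == r for t, r in zip(typed_words, reference_words))
            return correct / max(len(reference_words), 1)

        print(words_per_minute("the quick brown fox", 12))                        # 20.0
        print(accuracy(["the", "quick", "browm"], ["the", "quick", "brown"]))     # 0.666...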

    Read the article

  • What is "Memory Page out Rate"

    - by Tuxist
    Could somebody please tell me what 'Memory Page Out Rate' is? I have seen this in the 'HP OpenView' server monitoring tool and tried googling it. I would appreciate it if an expert could clarify. If the page-out rate is as high as 200+ per second, can it crash the server? Thanks in advance.

    Read the article

  • statistics service recomendation

    - by MichaelD
    Does anyone know of a good statistics service for a widget I'm developing? My requirements are:

    1. the ability to receive hundreds of thousands of events per day
    2. an API for registering events and getting results
    3. near real-time results

    Thanks, Michael

    Read the article

  • Hidden features of HTTP

    - by Gumbo
    What hidden features of HTTP do you think are worth mentioning? By hidden features I mean features that are already part of the standard but remain rather unknown or unused. Just one feature per answer, please.

    Read the article

  • Qt as a true multi-platform dev-env

    - by ruralcoder
    Inspired by the maturity problems I am facing porting to Mono on Mac and Linux, I am investigating the use of Qt as an alternative. I am curious to hear about your favorite Qt experiences, tips, or lesser-known but useful features. Please include only one experience per answer.

    Read the article

  • Why do browsers use my saved password for all forms in the one site?

    - by user313272
    Is there a way to limit the URL scope of saved credentials in browsers? For example, if I save a username and password for http://www.website.com/login, can I make it so that the rest of the forms on the site (http://www.website.com/members, http://www.website.com/admin, etc.) don't use these details? I'm aware of the autocomplete attribute, but I don't want to turn off autocomplete entirely. I would like the browser to remember the login details per form or URL.

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (e.g. a +, an o, a square, etc.). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see which data is which. There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades):

        % grab indices per decade
        ind12 = find(y >= 1e1 & y <= 1e2);
        indlow = find(y < 1e2);
        indhigh = find(y > 1e4);
        ind23 = find(y >= 1e2 & y <= 1e3);
        ind34 = find(y >= 1e3 & y <= 1e4);

        % we want ind12 indexes in this decade, find spacing
        tot23 = round(length(ind23)/length(ind12));
        tot34 = round(length(ind34)/length(ind12));

        % grab ones to keep
        ind23keep = ind23(1):tot23:ind23(end);
        ind34keep = ind34(1):tot34:ind34(end);

        indnew = [indlow' ind23keep ind34keep indhigh'];
        loglog(x(indnew), y(indnew));

    But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
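    A sketch of one way to build a logarithmically spaced index vector, shown here in Python/NumPy rather than MATLAB; the data is synthetic and the sizes are arbitrary. Indices spaced evenly in log space are dense in the low decades and sparse in the high ones, so the pruning is strongest at the top.

        import numpy as np
        import matplotlib.pyplot as plt

        # synthetic data: y grows roughly linearly with the sample index,
        # so the top decade holds the vast majority of the samples
        x = np.linspace(1, 1e6, 60000)
        y = x * (1 + 0.01 * np.random.randn(x.size))

        n_keep = 300
        # log-spaced positions between the first and last index, rounded to integers
        idx = np.unique(np.round(np.logspace(0, np.log10(x.size - 1), n_keep)).astype(int))

        plt.loglog(x[idx], y[idx], '+')
        plt.show()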

    Read the article

  • Python (Django). Store telnet connection

    - by Shamanu4
    Hello. I am programming a web interface which communicates with Cisco switches via telnet. I want to build a system that keeps one telnet connection open per switch, which every script (web interface, cron jobs, etc.) can access. This is needed to give each device a single query queue and to prevent the heavy Cisco CPU load caused by several concurrent telnet connections. How can I do this?
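    A minimal in-process sketch of the "one telnet connection per switch" idea, using Python's telnetlib with a lock to serialize queries. Note that separate OS processes (web workers, cron jobs) cannot share these objects directly, so a real deployment would put a registry like this behind a small daemon or broker; the host, prompt, and command below are made up.

        import telnetlib
        import threading

        _connections = {}                 # switch host -> (Telnet, Lock)
        _registry_lock = threading.Lock()

        def get_connection(host, port=23, timeout=10):
            # return the single shared connection for a switch, creating it on first use
            with _registry_lock:
                if host not in _connections:
                    _connections[host] = (telnetlib.Telnet(host, port, timeout), threading.Lock())
                return _connections[host]

        def run_command(host, command):
            # serialize commands to one switch through its single connection
            tn, lock = get_connection(host)
            with lock:
                tn.write(command.encode("ascii") + b"\n")
                return tn.read_until(b"#", timeout=5).decode("ascii", "replace")

        # hypothetical usage:
        # print(run_command("10.0.0.1", "show version"))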

    Read the article

  • Recommended NetBeans UML plugins

    - by Thomas Owens
    It appears that the NetBeans UML plugin has been discontinued, as per a discussion on the NetBeans forums. This was a great, free tool with nice model-to-code and code-to-model generation. There are a number of other UML NetBeans plugins out there; however, I've never used any of them. Any suggestions?

    Read the article

  • Rails: three most recent comments for unique users

    - by Dennis Collective
        class User
          has_many :comments
        end

        class Comment
          belongs_to :user

          named_scope :recent, :order => 'comments.created_at DESC'
          named_scope :limit, lambda { |limit| {:limit => limit} }
          named_scope :by_unique_users
        end

    What would I put in :by_unique_users so that I can call Comment.recent.by_unique_users.limit(3) and only get one comment per user?

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the computers (10 machines, 32-bit) and the PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write across them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million and I have 5 tables referring to it by primary/foreign keys. I am not using joins as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)

    (similarly, five queries)

    Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

    - Total number of records: 14,37,00,000
    - Total number of records per file: 143700000/20 = 71,85,000
    - Total number of records in each file: 71,85,000 * 5 = 3,59,25,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written for each file is about 6,21,000 * 5 = 31,05,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve.

    Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it speed up the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks
    Sree aurovindh V
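    A hedged sketch of the per-lookup loop described above, using psycopg2 and PyTables; the column layout, the id range, and the idea of appending rows in batches rather than one at a time are assumptions for illustration, not something stated above.

        import psycopg2
        import tables

        class TrainRow(tables.IsDescription):
            tr_id = tables.Int32Col(pos=0)     # hypothetical layout; the real table has more columns
            q_t   = tables.Float64Col(pos=1)

        pg = psycopg2.connect("dbname=x user=postgres")
        cur = pg.cursor()

        h5 = tables.open_file("part-01.h5", mode="w")
        table = h5.create_table("/", "train", TrainRow)

        batch = []
        for tr_id in range(1, 10001):          # arbitrary id range
            cur.execute("SELECT q_t FROM x.qt WHERE q_id = %s", (tr_id,))
            (q_t,) = cur.fetchone()
            batch.append((tr_id, q_t))
            if len(batch) >= 1000:             # append in batches, not per row
                table.append(batch)
                batch = []

        if batch:
            table.append(batch)
        table.flush()
        h5.close()
        pg.close()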

    Read the article

  • Using GhostScript to get page size

    - by Aristoteles
    Is it possible to get the page size (from e.g. a PDF document page) using GhostScript? I have seen the "bbox" device, but it returns the bounding box (it differs per page), not the TrimBox (or CropBox) of the PDF pages. (See http://www.prepressure.com/pdf/basics/page_boxes for info about page boxes.) Any other possibility?
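    If a route outside GhostScript is acceptable, the page boxes can also be read directly from the PDF; a small sketch using the pypdf Python library (the file name is made up):

        from pypdf import PdfReader

        reader = PdfReader("document.pdf")
        for number, page in enumerate(reader.pages, start=1):
            # MediaBox is always present; CropBox/TrimBox fall back to the enclosing box when absent
            print(number, "MediaBox:", page.mediabox, "CropBox:", page.cropbox, "TrimBox:", page.trimbox)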

    Read the article
