Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • alt-tab sort of web design

    - by sureshgl
    I am thinking of designing a web site with multiple related services. For every action the user takes in one service, some computation goes on in each of the other services. I want to display the service in action (the one chosen by the user) enlarged in the middle of the page, with the rest of the services shown as small, shrunken versions around it. The shrunken services should actually show what is happening in them at run time, as if each were the chosen service. The closest match to this behavior I know of is Reliance BigTV, where small live images of all the channels play at once and you choose the one you want to watch; the chosen image then becomes big and occupies the screen. Please let me know if I can do something like this using CSS, HTML, Ajax and PHP.
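
    For what it's worth, the enlarge-on-click part needs no Ajax at all; a minimal sketch (class names are invented), with keeping the small panels live left to periodic Ajax polling inside each panel:

        <style>
          .panel  { width: 150px; height: 100px; float: left; }
          .active { width: 600px; height: 400px; }
        </style>

        <div class="panel active" onclick="activate(this)">Service A</div>
        <div class="panel" onclick="activate(this)">Service B</div>

        <script>
          // Move the "active" (enlarged) class to the clicked panel.
          function activate(el) {
            var panels = document.querySelectorAll('.panel');
            for (var i = 0; i < panels.length; i++) {
              panels[i].className = 'panel';
            }
            el.className = 'panel active';
          }
        </script>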

    Read the article

  • better way to write this

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem all right?

        h = Hash.new
        Task.find_each(:include => [:user], :joins => :user,
                       :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                       Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
          h[t.login.intern] = [t.user.name, 'NA',
                               h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                      : h[t.login.intern][2] + (t.hrs_per_day * t.num_days),
                               t.category]
        end

    Also, if I have to aggregate not just by login but by login and category, how do I accomplish that? Thanks, ash
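
    On the second question, a minimal sketch of aggregating by login and category together, using the pair as the hash key (attribute names are taken from the question, which mixes hrs_per_day/num_days with hrs_day/workdays, so one pair is assumed; `tasks` stands in for the find_each scope above):

        # default each entry to [name, 'NA', 0 hours, category]
        h = Hash.new { |hash, key| hash[key] = [nil, 'NA', 0, nil] }
        tasks.each do |t|
          key = [t.login, t.category]              # composite key: login AND category
          h[key][0]  = t.user.name
          h[key][2] += t.hrs_per_day * t.num_days  # accumulate hours
          h[key][3]  = t.category
        end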

    Read the article

  • Why is this leaking memory? UIImage `cellForRowAtIndexPath:`

    - by Emil
    Hey. Instruments' Leaks tells me that this UIImage is leaking:

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:
            [imagesPath stringByAppendingPathComponent:
                [NSString stringWithFormat:@"/%@.png",
                    [postsArrayID objectAtIndex:indexPath.row]]]];

        // If image contains anything, set cellImage to it. If image is
        // empty, try one more time or use noImage.png, set in IB.
        if (image != nil) {
            cell.cellImage.image = image;
        }
        image = nil;
        [image release];

    (The custom table view cell class also releases cellImage in its dealloc method.) I haven't got a clue why it's leaking, but it certainly is. The images get loaded multiple times in a cellForRowAtIndexPath: method. The first three cells' images (130px high, all the space available) do not leak. Leaks gives me no other information than that a UIImage allocated here in the code leaks. Can you help me figure it out? Thanks :)
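
    For what it's worth, the leak is visible in the snippet itself: the pointer is set to nil before release is sent, so the release message goes to nil (a no-op in Objective-C) and the allocated image is never freed. A minimal reordering, with `path` standing in for the stringByAppendingPathComponent: expression:

        UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
        if (image != nil) {
            cell.cellImage.image = image;  // the image view retains the image
        }
        [image release];                   // balance the alloc first...
        image = nil;                       // ...then clear the pointer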

    Read the article

  • Facebook: "URL could not be liked because it has been blocked" error in iOS app

    - by Andrew Madsen
    I'm working on an iOS app that allows the user to like a Facebook page within the app. I've implemented this using FacebookLikeView. During the course of testing this functionality, I've liked and unliked the same page multiple times. Unfortunately, this seems to have triggered Facebook's spam detection. Now, when trying to like a page using the like button displayed by FacebookLikeView, the following error is presented: "URL could not be liked because it has been blocked". Based on reports of the same problem found by searching the web, I've filled out this form to request that Facebook remove the block. However, I've received no response from them. I'm not sure how to proceed. Has anyone else run into this issue and successfully solved it?

    Read the article

  • Effective methods for reading and writing large files in C

    - by Bertholt Stutley Johnson
    I'm writing an application that deals with very large user-generated input files. The program will copy about 95 percent of the file, effectively duplicating it while switching a few words and values in the copy, and then append the copy (in chunks) to the original file, such that each block (consisting of between 10 and 50 lines) in the original is followed by the copied and modified block, then the next original block, and so on. The user-generated input conforms to a certain format, and it is highly unlikely that any line in the original file is longer than 100 characters. Which would be the better approach? a) Use one file pointer plus variables that hold the current read position and write position, seeking the file pointer back and forth to read and write; or b) use multiple file pointers, one for reading and one for writing. I am mostly concerned with the efficiency of the program, as the input files will reach up to 25,000 lines, each about 50 characters long. Thanks!
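
    For what it's worth, a minimal sketch of option (b), assuming the result may be written to a temporary file and renamed over the original afterwards, so neither stream's position ever needs juggling (the word-switching and the final partial block are elided):

        #include <stdio.h>

        #define MAX_LINES 50    /* blocks are 10-50 lines; block_lines <= MAX_LINES */
        #define MAX_LEN   128   /* lines stay under 100 characters */

        int process_file(const char *src, const char *dst, int block_lines)
        {
            FILE *in  = fopen(src, "r");
            FILE *out = fopen(dst, "w");
            char block[MAX_LINES][MAX_LEN];
            int n = 0, i;

            if (!in || !out) {
                if (in)  fclose(in);
                if (out) fclose(out);
                return -1;
            }
            while (fgets(block[n], MAX_LEN, in)) {
                if (++n == block_lines) {
                    for (i = 0; i < n; i++)       /* original block */
                        fputs(block[i], out);
                    for (i = 0; i < n; i++) {     /* modified copy */
                        /* ... switch words/values in block[i] here ... */
                        fputs(block[i], out);
                    }
                    n = 0;
                }
            }
            /* ... flush any final partial block the same way ... */
            fclose(in);
            fclose(out);
            return 0;
        }

    At 25,000 lines of roughly 50 characters, the whole file is on the order of a megabyte, so even reading it entirely into memory first would be a reasonable option.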

    Read the article

  • Silverlight 4 Printing Question

    - by hallgato.attila
    Hello everyone! I'm having a problem with printing. Is it possible to print a (customized) DataGrid that doesn't fit the page? a) Using a ViewBox to make it fit one page: my problem here is that I can get PrintableArea.Width and PrintableArea.Height in the PrintPage event handler, from the PrintPageEventArgs, but in that event handler I can't set the size of a UIElement before it gets printed. b) Multiple pages: is it possible somehow to print the whole DataGrid, even the parts that are not visible (don't fit the page), and to somehow make the printable area cover only a part of the DataGrid at a time?
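
    For what it's worth, option (b) maps onto Silverlight 4's PrintDocument API, which raises PrintPage once per page until HasMorePages goes false; a minimal sketch, where BuildPageVisual is a hypothetical helper that arranges one page's worth of rows into a UIElement sized to the printable area:

        var doc = new PrintDocument();
        int pageIndex = 0;
        doc.PrintPage += (s, e) =>
        {
            // hand the printer one chunk of the grid per page
            e.PageVisual = BuildPageVisual(myDataGrid, pageIndex, e.PrintableArea);
            pageIndex++;
            e.HasMorePages = pageIndex < totalPages;  // keep PrintPage firing
        };
        doc.Print("DataGrid");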

    Read the article

  • Dynamic Database connection

    - by gsieranski
    I have multiple client databases that I need to hit on the fly, and I am having trouble getting my code to work. At first I was just storing a connection string in a client object in the db, pulling it out based on the logged-in user, and passing it to the LINQ data context constructor. This works fine in my development environment but is failing on the Winhost server I am using, which runs SQL 2008. I am getting a "The current configuration system does not support user scoped settings." error. Any help or guidance on this issue is greatly appreciated. Greg
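
    For what it's worth, the error points at user-scoped settings (Properties.Settings) rather than at the data context itself, which accepts a plain connection string; a minimal sketch, where GetConnectionString is a hypothetical lookup keyed by the logged-in user and ClientDataContext stands in for the generated context class:

        // look the string up from the master db or web.config at runtime,
        // avoiding user-scoped settings entirely
        string connStr = GetConnectionString(currentUser);
        using (var db = new ClientDataContext(connStr))
        {
            // query this client's database
        }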

    Read the article

  • URLs with query stripped of ampersands appearing in error logs

    - by Jeremy DeGroot
    I've noticed a curious phenomenon popping up in my error logs recently. If, as the result of processing a form, I redirect my users to the URL http://www.example.com/index.php?foo=bar&bar=baz, I will see the following two URLs in my log:

        http://www.example.com/index.php?foo=barbar=baz
        http://www.example.com/index.php?foo=bar&bar=baz

    The first one is obviously incorrect and causes my application to redirect to a 404. It always appears first, usually a second before the second one. The 404 page is not doing the redirection, so it appears that the browser is trying both versions. At first, my server logs made me believe it affected only Firefox 3.6.3, but I've found an example of Safari being afflicted as well. It happens fairly intermittently, though it can occur multiple times in a user's session. I've never been able to get it to happen to me. Any thoughts as to the nature of the problem, or a solution?

    Read the article

  • Rails - building an absolute url in a model's virtual attribute without url helper

    - by Nick
    I have a model that has Paperclip attachments, and the model might be used in multiple Rails apps. I need to return a full (non-relative) URL to the attachment as part of a JSON API being consumed elsewhere. I'd like to abstract the Paperclip aspect and have a simple virtual attribute like this:

        def thumbnail_url
          self.photo.url(:thumb)
        end

    This, however, only gives me the relative path. Since it's in the model, I can't use the URL helper methods, right? What would be a good approach to prepending the application root URL, given that I don't have helper support? I would like to avoid hardcoding something or adding code to the controller method that assembles my JSON. Thank you
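
    One minimal sketch: keep the host out of the model's logic and read it from a per-application setting, so each consuming app configures its own root. The ASSET_HOST constant here is hypothetical; it could equally come from a YAML config or from each app's default_url_options:

        def thumbnail_url
          "#{ASSET_HOST}#{photo.url(:thumb)}"   # e.g. "http://example.com" + "/system/..."
        end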

    Read the article

  • Setting current culture with threads in ASP.NET MVC

    - by mare
    Here's an example of a SetCulture attribute, which inside does something like this:

        public void OnActionExecuting(ActionExecutingContext filterContext)
        {
            string cultureCode = SetCurrentLanguage(filterContext);
            if (string.IsNullOrEmpty(cultureCode)) return;

            HttpContext.Current.Response.Cookies.Add(
                new HttpCookie("Culture", cultureCode)
                {
                    HttpOnly = true,
                    Expires = DateTime.Now.AddYears(100)
                });
            filterContext.HttpContext.Session["Culture"] = cultureCode;

            CultureInfo culture = new CultureInfo(cultureCode);
            System.Threading.Thread.CurrentThread.CurrentCulture = culture;
            System.Threading.Thread.CurrentThread.CurrentUICulture = culture;
        }

    I was wondering how this affects a site with multiple users logged on, each setting their own culture. What is the scope of a thread here with regard to the IIS worker process (w3wp) that the site is running in?

    Read the article

  • nHibernate: Query tree nodes where self or ancestor matches condition

    - by Famous Nerd
    I have seen a lot of competing theories about hierarchical queries in Fluent NHibernate, or even basic NHibernate, and how they're a difficult beast. Does anyone know of good resources on the subject? I find myself needing to do queries similar to this (using a file-system analogy):

        select folderObjects from folders
        where folder.Permissions includes :myPermissionLevel
           or [any of my ancestors] includes :myPermissionLevel

    This is a one-to-many tree; no node has multiple parents. I'm not sure how to describe this in NHibernate-specific terms, or even in SQL terms. I've seen the phrase "nested sets" mentioned; is that applicable here? I'm not sure. Can anyone offer advice on approaches to writing this sort of NHibernate query?

    Read the article

  • what are the advantages of C# over Python

    - by Matt
    I like Python mostly for the great portability and the ease of coding, but I was wondering, what are some of the advantages that C# has over Python? The reason I ask is that one of my friends runs a private server for an online game (UO), and he offered to make me a dev if I wanted, but the software for the server is all written in C#. I'd love to do this, but I don't really have time to do multiple languages, and I was just after a few more reasons to justify taking C# over Python to myself. I'm doing this all self-taught as a hobby, btw

    Read the article

  • Many to Many DataSet Insert to Database using Stored Procedures

    - by Jonn
    I'm trying to do a many-to-many bulk insert to the database using a list of three different types of models. The only tools I can use are stored procedures and DataSets (but I can't use DataSet fills or DataAdapters, just plain ol' stored procedures). Is there an easy way to pass multiple values, let alone three different lists, and bulk insert them into a table in the database using stored procedures? (As specified in the title, the three tables are actually two tables plus a third table that establishes the many-to-many relationship between the two.)
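
    For what it's worth, if the database is SQL Server 2008, table-valued parameters let a whole list (an ADO.NET DataTable maps onto one directly) be handed to a stored procedure in a single call; a minimal sketch with invented names, one table type per list shape:

        CREATE TYPE dbo.LinkPairList AS TABLE (LeftId INT, RightId INT);
        GO
        CREATE PROCEDURE dbo.InsertLinkPairs
            @pairs dbo.LinkPairList READONLY
        AS
        BEGIN
            -- bulk insert the whole junction-table list at once
            INSERT INTO dbo.LinkTable (LeftId, RightId)
            SELECT LeftId, RightId FROM @pairs;
        END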

    Read the article

  • How can I integrate graphs and JPEG images with RAVE Reports?

    - by JonDave of the Philippines
    I really love RAVE Reports for creating multiple reports, especially with formulas and accounting systems... but recently I have had problems integrating JPEG pictures and graphs into my little ERP system, newly developed in Delphi. I bought some JPEG components, but they seem problematic. I have also experienced some irregularities with my RAVE Reports. When I run my program and preview some reports, everything seems fine, but when I close the program, the EXE is still in the task bar; I need to Ctrl-Alt-Delete and kill it before report previews work normally again. If I don't, a RAVE REPORT ERROR message saying "STREAM READ ERROR" appears every time I click the PRINT button, even though I use FreeAndNil to free the memory stream when the report preview form closes. When I run my application without previewing RAVE reports, the program closes perfectly. Any suggestions and recommendations would be an enormous help. Thank you.

    Read the article

  • mysql 2 primary keys on one table

    - by Bharanikumar
    CREATE TABLE Orders (
        ID      SMALLINT UNSIGNED NOT NULL,
        ModelID SMALLINT UNSIGNED NOT NULL,
        Descrip VARCHAR(40),
        PRIMARY KEY (ID, ModelID)
    );

    Basically, may I know: can we create two primary keys on one table like this? Is it correct? As I understand the SQL rules, we can create any number of unique keys on one table, but only one primary key. So how is my system allowing me to create multiple primary keys? Please advise: what is the general rule?

    Read the article

  • Make xargs execute the command once for each line of input

    - by Readonly
    How can I make xargs execute the command exactly once for each line of input given? Its default behavior is to chunk the lines and execute the command once, passing multiple lines to each instance. From http://en.wikipedia.org/wiki/Xargs:

        find /path -type f -print0 | xargs -0 rm

    In this example, find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. This is more efficient than the functionally equivalent version:

        find /path -type f -exec rm '{}' \;

    I know that find has the -exec flag. I am just quoting an illustrative example from another resource.
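
    For reference, a minimal illustration: xargs's -n flag caps how many arguments each invocation receives, and -L does the same per line of input:

        # run rm once per file name instead of once per sublist
        find /path -type f -print0 | xargs -0 -n 1 rm

        # or: run the command once for each input line
        xargs -L 1 echo < list.txt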

    Read the article

  • checking if a record exists in DB, in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database that returns a lot of data and requires multiple joins. My question is: is it better to use a single query that checks whether the data exists and fetches it if it does, or to run a simpler query first to check that the data exists and then, if it does, run the full query knowing the record is there? Example, with three tables a, b and ab (a junction table):

        select * from a, b, ab
        where condition and condition and condition and condition etc...

    or

        select id from a, b, ab where condition

    and then, if it exists, do the query above. I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?
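
    For what it's worth, if the two-step route is taken, the existence probe can stay cheap by selecting a constant and letting the engine stop at the first match; a minimal sketch against the question's placeholder schema (LIMIT syntax assumed; other engines spell it TOP 1 or an EXISTS subquery):

        select 1
        from a, b, ab
        where condition and condition and condition and condition
        limit 1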

    Read the article

  • Search for Variable Usage In SSIS tasks

    - by yoni.s
    Hi all: As what seems to be some sort of penance for sins in a prior life, I have been tasked with maintaining some SSIS packages. (NO! NO BADMOUTHING SSIS!! BAD PROGRAMMER! NO DOUGHNUT!) Anyhoo, many of the packages have variables, defined in an outer container, which are used in multiple inner containers and in script tasks. What I want to do is find all the places in a package where a variable is being used; in other words, search for instances of variable usage across all tasks of a package. This would be a huge help, but I cannot for the life of me find out how this can be done in BIDS. (This is SSIS/BIDS 2008.) Thanks for any help, YS
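
    One workaround sketch, since a .dtsx package is just XML on disk: search the file itself for the variable name to find each task, expression, or configuration that references it (script-task bodies may or may not be searchable, depending on how they are stored). findstr is the Windows counterpart to grep:

        grep -n "User::MyVariable" MyPackage.dtsx

        findstr /n "MyVariable" MyPackage.dtsx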

    Read the article

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.

    Issue: To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must fail over automatically to the remaining instances if one goes down. Auto-discovery of newly running instances is desirable but not required.

    Research: I have reviewed the official PostgreSQL documentation on this issue. However, it focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, pgpool-II.

    Question: What are your real-world experiences with pgpool-II? What other simple and reliable alternatives are available?

    Read the article

  • Mac OS date command - getting higher resolution time

    - by Mark
    Hey all, I am trying to use the date command in Terminal on multiple Mac OS X machines, synced via NTP, to synchronize some code in a program. Essentially I am running a program, MyProgram, with the output of date as an argument. I can get date to give me the seconds since the Unix epoch with the %s specifier. When I try to use %N to get nanosecond resolution, date just returns N. Is there any way to get date to give me finer-than-second resolution? I wouldn't even mind passing two arguments, such as (date +%s):arg2, and then converting units in the program. Many thanks in advance! The %N specifier is listed here: http://en.wikipedia.org/wiki/Date_(Unix)
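
    For what it's worth, the stock Mac OS X date is the BSD version, which has no %N at all; the GNU version (installed as gdate by the MacPorts or Homebrew coreutils package) supports it:

        gdate +%s%N     # nanoseconds since the Unix epoch (GNU date only)
        gdate +%s.%N    # epoch seconds with a fractional part

        # a portable fallback with sub-second resolution:
        python -c 'import time; print "%.6f" % time.time()'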

    Read the article

  • How to map two tables in database PHPMyAdmin

    - by thegrede
    I'm working now on a project where users can save their own coupon codes on the website, and I want to know the best way to do that. Let's say I have one table with the users, like this:

        userId | firstName | lastName | codeId

    and then a table of the coupon codes, like this:

        codeId | codeNumber

    So what I can do is connect the codeId to the userId: when someone saves a coupon, the codeId from the coupon table goes into the codeId field of the users table. But what happens when a user has multiple coupons; how should they be connected to the user? I have two options. Option 1: save the codeIds from the coupons table into the codeId field of the users table as a list, like 1,2,3,4,5. Option 2: add a userId field to the coupons table and insert a new row for each coupon a user adds, connecting that user's userId to the code. Which of the two options is better? Thank you, guys.
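
    For what it's worth, a common refinement of option 2 is a separate junction table, so a single coupon row can be shared by any number of users and neither original table needs a list-valued column; a minimal sketch with names taken from the question:

        CREATE TABLE user_coupons (
            userId INT NOT NULL,   -- references users.userId
            codeId INT NOT NULL,   -- references coupons.codeId
            PRIMARY KEY (userId, codeId)
        );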

    Read the article

  • What is the design principle behind Servlets being Singleton

    - by Sandeep Jindal
    A servlet container "generally" creates one instance of a servlet and uses different threads on that same instance to serve multiple requests. (I know this can be changed using the deprecated SingleThreadModel and other features, but this is the usual way.) I thought the simple reason behind this was the performance gain, as creating threads is cheaper than creating instances. But it seems this is not the reason. On the other hand, creating instances has the small advantage that developers never have to worry about thread safety. I am trying to understand the reason for this decision, given the trade-off against thread safety.

    Read the article

  • DEADLOCK_WRAP error when using Berkeley Db in python (bsddb)

    - by JiminyCricket
    I am using a Berkeley DB (bsddb) to store a huge list of key-value pairs, but for some reason when I try to access some of the data later:

        try:
            key = 'scrape011201-590652'
            contenttext = contentdict[key]
        except:
            ...  # print the error

    I get this error:

        <type 'exceptions.KeyError'> 'scrape011201-590652'
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in __getitem__
            return _DeadlockWrap(lambda: self.db[key])  # self.db[key]
          File "/usr/lib64/python2.5/bsddb/dbutils.py", line 62, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in <lambda>
            return _DeadlockWrap(lambda: self.db[key])  # self.db[key]

    I am not sure what DeadlockWrap is, but there isn't any other program or process accessing the Berkeley DB or writing to it (as far as I know), so I'm not sure how we could get a deadlock, if that's what it's referring to. Is it possible that I am trying to access the data too rapidly? I have this function call in a loop, something like:

        for i in hugelist:
            # try to get a value from the berkdb
            # do something with it

    I am running this with multiple datasets, and this error only occurs with one of them, the largest one, not the others.
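
    For what it's worth, _DeadlockWrap is just bsddb's retry helper: it re-runs the wrapped operation when it hits a DBLockDeadlockError and passes every other exception straight through. The exception actually raised here is a plain KeyError, which simply means that key is not in the database. A minimal guard sketch:

        if contentdict.has_key(key):       # bsddb's dict-like membership test
            contenttext = contentdict[key]
        else:
            contenttext = None             # record genuinely missing; handle it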

    Read the article

  • How do people manage changes to common library files stored across multiple (Mercurial) repositories?

    - by mckoss
    This is perhaps not a question unique to Mercurial, but that's the SCM I've been using most lately. I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new one. The problem comes when I want to merge all the changes I made in my latest project back into a "master" copy of those shared library files. Since files stored in disjoint repositories have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or between two independent projects). I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (which is one of the reasons I'm using SVN less, as its merges require remembering when copies were made across branches). Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.
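
    One sketch of the usual approach: keep the shared library as its own repository and clone it into each project instead of copying files, so every copy shares ancestry with the master and hg can merge intelligently (paths are illustrative; Mercurial's subrepository feature automates much the same layout):

        # start a new project's lib/ as a clone, not a file copy
        hg clone ~/repos/shared-lib ~/projects/newproj/lib

        # later, push library fixes made inside the project back upstream
        cd ~/projects/newproj/lib
        hg push ~/repos/shared-lib

        # and pull upstream changes in, merging against shared history
        hg pull ~/repos/shared-lib
        hg merge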

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger the way jogging around a dewy park on a Saturday morning does.

    In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play.

    For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill. Making it natural then requires repetition, and experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill.

    I have recently taken several informative training classes with Kimball University on data warehousing and ETL, so I now have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse design of late and have started to reach some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is not yet deeply ingrained in my brain's muscle memory.

    On the other hand, relational database design is something I am comfortable doing no matter how much or how little I actually get to do it. I have done it professionally for well over a decade, I teach classes on it, and I do a lot of mental training beyond the work day. Sometimes the training is just basic education, some of it reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness as a data professional is also probably the nerdiest thing I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
    I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help). Never are the designs truly fleshed out, but they go far enough to work through structures and processes. On "paper", I have designed database systems to catalog things as trivial as my Lego creations, rental car companies, and my audio and video collections. Once I have a database designed mentally, sometimes I will create it, add some data (often using Red Gate's Data Generator), and write a few queries (like the GROUPING SETS example below) to see if a concept was realistic, but I will rarely flesh the database out fully, since I have no desire to do any user interface programming anymore. The mental training allows me to keep in practice for when the time comes to do the work I love most for real... even if I have been spending most of my work time lately building data warehouses. If you are really strong of mind and body, perhaps you can mix a mental run with a physical run, though don't run off a cliff while contemplating how you might design a database to catalog the trees on a mountain; that would be contradictory to the purpose of both types of exercise.
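
    As a footnote to the GROUPING SETS remark above, a minimal T-SQL illustration of grouping data exactly three ways in one statement (the table and column names are invented):

        SELECT Region, Product, SUM(Amount) AS Total
        FROM   Sales
        GROUP BY GROUPING SETS ((Region), (Product), (Region, Product));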

    Read the article
