Search Results

Search found 52375 results on 2095 pages for 'process id'.


  • How to expand on mouse click

    - by shamim
    http://findaccountingsoftware.com/directory/gba-systems/fams-fixed-assets-management-system/ This site contains a tab container. On the Applications tab, clicking the + sign expands the section; I want to know what this technique is called and how to do it. Something else happens on that click: the page automatically scrolls and focuses on the text. What is that called, and how is it done?


  • Kill process after some time

    - by yael
    I want to limit the running time of a grep command. For example, if I run: grep -qsRw -m1 "parameter" /var I want the grep process to stay alive no longer than 30 seconds. How can I do this? And, if it is possible, how can I go back to running it with no time limit? Yael
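
    A common answer here (a sketch, not necessarily what the poster ended up using) is the GNU coreutils wrapper, e.g. timeout 30 grep -qsRw -m1 "parameter" /var, and simply dropping the wrapper restores the unlimited behaviour. The same idea from Python, assuming Python 3.5+ for subprocess.run:

        import subprocess

        def limited_grep(pattern, path, limit=30):
            # Run grep but kill it if it exceeds `limit` seconds.
            try:
                result = subprocess.run(
                    ["grep", "-qsRw", "-m1", pattern, path],
                    timeout=limit,
                )
                return result.returncode == 0      # 0 means a match was found
            except subprocess.TimeoutExpired:
                return False                       # timed out: treat as no match

        # limited_grep("parameter", "/var")         # capped at 30 seconds
        # limited_grep("parameter", "/var", None)   # timeout=None means no limit again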


  • How to make multiple Popups using Jquery

    - by Rajasekar
    For the popup I am using the following code. Style: a.selected { background-color:#1F75CC; color:white; z-index:100; } .messagepop { background-color:#FFFFFF; border:1px solid #999999; cursor:default; display:none; margin-top: 15px; position:absolute; text-align:left; width:394px; z-index:50; padding: 25px 25px 20px; } label { display: block; margin-bottom: 3px; padding-left: 15px; text-indent: -15px; } .messagepop p, .messagepop.div { border-bottom: 1px solid #EFEFEF; margin: 8px 0; padding-bottom: 8px; } JavaScript: $(function() { $("#contact").live('click', function(event) { $(this).addClass("selected").parent().append('<div class="messagepop pop"><form method="post" id="new_message" action="/messages"><p><label for="email">Your email or name</label><input type="text" size="30" name="email" id="email" /></p><p><label for="body">Message</label><textarea rows="6" name="body" id="body" cols="35"></textarea></p><p><input type="submit" value="Send Message" name="commit" id="message_submit"/> or <a class="close" href="/">Cancel</a></p></form></div>'); $(".pop").slideFadeToggle(); $("#email").focus(); return false; }); $(".close").live('click', function() { $(".pop").slideFadeToggle(); $("#contact").removeClass("selected"); return false; }); }); $.fn.slideFadeToggle = function(easing, callback) { return this.animate({ opacity: 'toggle', height: 'toggle' }, "fast", easing, callback); }; and finally <a href="/contact" id="contact">Contact Us</a> I need to show another popup when the register link is clicked. Should I use the same function with modifications, or a separate function? Please show me the code with the modifications. I'm stuck; help me.


  • Why aren't the :locals hash variables being passed in to a partial when called from inside my rake task?

    - by marshally
    I need to render a bunch of painfully long running partials using a rake task. When I try to pull the partial from a rake task, I get the dreaded "Called id for nil, which would mistakenly be 4" error, which usually means that my locals hash has not been properly set into the partial. Here's the rake task (some variable names have been changed to protect the innocent): namespace :precache do desc "Precache stuff" task :precache => :environment do av = ActionView::Base.new(Rails::Configuration.new.view_path, {}) av.class_eval do include ApplicationHelper end @user = User.find(21) @rank = Rank.find(2) data = av.render(:partial => "reports/listing", :locals => {:user => @user, :rank => @rank}) end end And this is the error that I am getting: ** Execute precache:precache rake aborted! Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id On line #1 of app/views/reports/listing.html.erb 1: <%- @rid = @rank.id %> 2: <%- @cid = @user.id %> 3: <%- cache(:action => 'reports', :key => [arg1, arg2, arg3] ) do %> 4: <%- app/views/reports/_downline_js.html.erb:1 lib/tasks/precache_fragments.rake:12 rake (0.8.7) lib/rake.rb:636:in `call' rake (0.8.7) lib/rake.rb:636:in `execute' rake (0.8.7) lib/rake.rb:631:in `each' rake (0.8.7) lib/rake.rb:631:in `execute' rake (0.8.7) lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' rake (0.8.7) lib/rake.rb:590:in `invoke_with_call_chain' rake (0.8.7) lib/rake.rb:583:in `invoke' rake (0.8.7) lib/rake.rb:2051:in `invoke_task' rake (0.8.7) lib/rake.rb:2029:in `top_level' rake (0.8.7) lib/rake.rb:2029:in `each' rake (0.8.7) lib/rake.rb:2029:in `top_level' rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling' rake (0.8.7) lib/rake.rb:2023:in `top_level' rake (0.8.7) lib/rake.rb:2001:in `run' rake (0.8.7) lib/rake.rb:2068:in `standard_exception_handling' rake (0.8.7) lib/rake.rb:1998:in `run' rake (0.8.7) bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 details: I'm using Rails 2.3.5 and Ruby 1.8.7. Developing on Mac OSX. Eventually I will be deploying to Heroku.


  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
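
    For what it's worth, one minimal way to flesh out the skeleton above is a sketch like the following (assuming Python 3; this is not the solution from the linked github repo). It keeps the file I/O in the main process and hands the independent sums to a multiprocessing.Pool via imap(), which streams chunks of work to the workers and yields results back lazily, touching on the pool and chunking questions without explicit queues:

        import csv
        import multiprocessing
        import sys

        def sum_row(indexed_row):
            """Worker: takes (index, [ints]) and returns (index, sum)."""
            i, row = indexed_row
            return i, sum(row)

        def main(argv):
            numprocs = multiprocessing.cpu_count()
            with open(argv[0]) as infile, open(argv[1], "w", newline="") as outfile:
                reader = csv.reader(infile)
                writer = csv.writer(outfile)
                # lazily parse rows in the main process
                rows = ((i, [int(x) for x in row]) for i, row in enumerate(reader))
                with multiprocessing.Pool(numprocs) as pool:
                    # chunksize batches work to cut inter-process communication;
                    # imap preserves input order, imap_unordered would not
                    for i, total in pool.imap(sum_row, rows, chunksize=100):
                        writer.writerow([i, total])

        if __name__ == "__main__":
            main(sys.argv[1:])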


  • jQuery: how can I clear content without getting the dreaded "stop running this script?" dialog?

    - by Cheeso
    I have a div, that holds a div. like this: <div id='reportHolder' class='column'> <div id='report'> </div> </div> Within the inner div, I add a bunch (7-12) of pairs of a and div elements, like this: <h4><a>Heading1</a></h4> <div> ...content here....</div> The total size of the content, is maybe 200k. Each div just contains a fragment of HTML. After I add all the content, I then create an accordion. like this: $('#report').accordion({collapsible:true, active:false}); This all works fine. The problem is, when I try to clear or remove the report div, it takes a looooooong time, and I get 3 or 4 popups asking "Do you want to stop running this script?" I have tried several ways: option 1: $('#report').accordion('destroy'); $('#report').remove(); $("#reportHolder").html("<div id='report'> </div>"); option 2: $('#report').accordion('destroy'); $('#report').html(''); $("#reportHolder").html("<div id='report'> </div>"); option 3: $('#report').accordion('destroy'); $("#reportHolder").html("<div id='report'> </div>"); No matter what, it hangs for a long while. The call to accordion('destroy') seems to not be the source of the delay. It's the erasure of the html content within the report div. EDIT - fixed typo. ps: this happens on FF3.5 as well as IE8 . Questions: What is taking so long? How can I remove content more quickly?


  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 Oracle instances on it, and the file system is NFS on a NetApp filer. After shutting down the databases, one process for each database doesn't quit for a long time, and kill -9 doesn't work on any of them. I tried to truss and pfiles them, but both commands threw errors. iostat shows there is a lot of IO to the NetApp server, so someone said the processes were busy writing data to the remote NetApp server and wouldn't quit before the writes completed; all that could be done was to wait until the IO finished. After a longer wait (about 1.5 hours), the processes exited. So my question is: how can a process ignore the kill signal? As far as I know, kill -9 should stop a process immediately. Have you encountered a situation where kill -9 doesn't kill the process right away?
    TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
    oracle 1469 25053 0 22:36:53 pts/1 0:00 grep dbw0
    oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
    oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
    oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
    TEST7-stdby-phxdbnfs11$ kill -9 1051
    TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
    oracle 1493 25053 0 22:37:07 pts/1 0:00 grep dbw0
    oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
    oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
    oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
    TEST7-stdby-phxdbnfs11$ kill -9 471
    TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
    oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
    oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
    oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
    oracle 1495 25053 0 22:37:22 pts/1 0:00 grep dbw0
    TEST7-stdby-phxdbnfs11$ ps -ef|grep smon
    oracle 1524 25053 0 22:38:02 pts/1 0:00 grep smon
    TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
    oracle 1526 25053 0 22:38:06 pts/1 0:00 grep dbw0
    oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
    oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
    oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
    TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795
    TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
    oracle 1528 25053 0 22:38:19 pts/1 0:00 grep dbw0
    oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7
    oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2
    oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1
    TEST7-stdby-phxdbnfs11$ truss -p 26795
    truss: unanticipated system error: 26795
    TEST7-stdby-phxdbnfs11$ pfiles 26795
    pfiles: unanticipated system error: 26795
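
    For what it's worth: SIGKILL cannot be caught or ignored, but the kernel only acts on it once the process comes out of an uninterruptible call (such as a write to an unresponsive NFS/NetApp server), so the signal can stay pending for as long as the I/O takes, which matches the 1.5-hour wait described above. A small illustrative sketch (Python 3 on a POSIX system; the helper name is made up) that sends SIGKILL and then polls until the process really disappears:

        import os
        import signal
        import time

        def kill_and_wait(pid, poll=5, max_wait=3600):
            """Send SIGKILL, then poll until the process is actually gone."""
            os.kill(pid, signal.SIGKILL)
            waited = 0
            while waited < max_wait:
                try:
                    os.kill(pid, 0)          # signal 0 only checks existence
                except ProcessLookupError:
                    return True              # the process has finally exited
                time.sleep(poll)
                waited += poll
            return False                     # SIGKILL still pending after max_wait

        # kill_and_wait(26795)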


  • Modern Batch Processing in Linux

    - by Castro
    What tools, languages, and infrastructure do you use for batch processing in Linux? I am looking for something that facilitates tasks such as: processing files, log validation, job control (start, stop, restart a process), and MySQL connections. Thanks for any help!
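
    Answers will vary by shop, but as a rough illustration of the job-control-plus-logging part in plain Python (the script name and paths below are hypothetical), a minimal start/log/restart wrapper could look like this:

        import logging
        import subprocess
        import time

        logging.basicConfig(filename="batch.log", level=logging.INFO)

        def run_with_restart(cmd, max_restarts=3):
            """Start a batch job, log each exit, and restart it on failure."""
            for attempt in range(1 + max_restarts):
                logging.info("starting %s (attempt %d)", cmd, attempt + 1)
                rc = subprocess.call(cmd)
                logging.info("%s exited with code %d", cmd, rc)
                if rc == 0:
                    return True
                time.sleep(5)      # brief back-off before restarting
            return False

        # run_with_restart(["python", "process_files.py", "/data/incoming"])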


  • SQLCMD Restore works in Management Studio but not from DOS prompt

    - by Gautam
    Any idea why my Restore command works fine when run in Management Studio 2008 but not when run from the dos prompt? Shown below is the error when running from the dos prompt. C:\>SQLCMD -s local\SQL2008 -d master -Q "RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10" Msg 3634, Level 16, State 1, Server GAUTAM, Line 1 The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'. Msg 3156, Level 16, State 8, Server GAUTAM, Line 1 File 'Sample.Db' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'. Use WITH MOVE to identify a valid location for the file. Msg 3634, Level 16, State 1, Server GAUTAM, Line 1 The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'. Msg 3156, Level 16, State 8, Server GAUTAM, Line 1 File 'Sample.Db_log' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'. Use WITH MOVE to identify a valid location for the file. Msg 3119, Level 16, State 1, Server GAUTAM, Line 1 Problems were identified while planning for the RESTORE statement. Previous messages provide details. Msg 3013, Level 16, State 1, Server GAUTAM, Line 1 RESTORE DATABASE is terminating abnormally. However if I execute this directly in Management Studio 2008, it works fine: RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10 There is no lock or security issues, the data base doesn't exist on the server. I can't figure it out. Any ideas?


  • jquery question - addclass to element depending on active image in rotating banner

    - by whitman6732
    I have a banner that rotates a series of three images. The active one has a display:block, while the inactive have display:none. I'm trying to match the class for the active image with an tag so that I can add an "active" class to the a tag. Having some trouble writing that function. It's adding all of the classes for each of the img elements, regardless of whether they're being displayed or not... The html markup looks like this: <div id="banner-area"> <div class="clearfix" id="promo-boxes-nav"> <ul id="banner-tabs"> <li> <ul id="page-tabs"> <li><a class="t39" href="#">1</a></li> <li><a class="t42" href="#">2</a></li> <li><a class="t49" href="#">3</a></li> </ul> </li> <li class="last">&nbsp;</li> </ul> </div> <div id="banner-content"> <img class="t39" src="http://localhost:8888/1.png" style="position: absolute; top: 0px; left: 0px; display: none; opacity: 0;"> <img class="t42" src="http://localhost:8888/2.png" style="position: absolute; top: 0px; left: 0px; display: block; opacity: 1;"> <img class="t49" src="http://localhost:8888/3.png" style="position: absolute; top: 0px; left: 0px; display: none;opacity: 0;"> </div> </div> The function in its current form looks like this: $j(document).ready(function() { $j('#banner-content') .cycle({ fx: 'fade', speed: 300, timeout: 3000, next: '.t1', pause: 1 }); if($j('#page-tabs a').attr('class')) { $j('#banner-content img').each(function() { if($j(this).attr('display','block')) { var bcClass = $j(this).attr('class'); } $j('#page-tabs li a').each(function() { if($j('#page-tabs li a').hasClass(bcClass)) { $j(this).addClass(bcClass); } }); }); } })


  • Why are emails sent from my applications being marked as spam?

    - by Brian
    Hi. I have 2 web apps running on the same server. The first is www.nimikri.com and the other is www.hourjar.com. Both apps share the same IP address (75.127.100.175). My server is through a shared hosting company. I've been testing my apps, and at first all my emails were being delivered to me just fine. Then a few days ago every email from both apps got dumped into my spam box (in gmail and google apps). So far the apps have just been sending emails to me and nobody else, so I know people aren't manually flagging them as spam. I did a reverse DNS lookup for my IP and the results I got were these: 100.127.75.in-addr.arpa NS DNS2.GNAX.NET. 100.127.75.in-addr.arpa NS DNS1.GNAX.NET. Should the reverse DNS lookup point to nimikri.com and hourjar.com, or are they set up fine the way they are? I noticed in the email header these 2 lines: Received: from nimikri.nimikri.com From: Hour Jar <[email protected]> Would the different domain names be causing gmail to think this is spam? Here is the header from one of the emails. Please let me know if any of this looks like a red flag for spam. Thanks. Delivered-To: [email protected] Received: by 10.231.157.85 with SMTP id a21cs54749ibx; Sun, 25 Apr 2010 10:03:14 -0700 (PDT) Received: by 10.151.130.18 with SMTP id h18mr3056714ybn.186.1272214992196; Sun, 25 Apr 2010 10:03:12 -0700 (PDT) Return-Path: <[email protected]> Received: from nimikri.nimikri.com ([75.127.100.175]) by mx.google.com with ESMTP id 28si4358025gxk.44.2010.04.25.10.03.11; Sun, 25 Apr 2010 10:03:11 -0700 (PDT) Received-SPF: neutral (google.com: 75.127.100.175 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=75.127.100.175; Authentication-Results: mx.google.com; spf=neutral (google.com: 75.127.100.175 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] Received: from nimikri.nimikri.com (localhost.localdomain [127.0.0.1]) by nimikri.nimikri.com (8.14.3/8.14.3) with ESMTP id o3PH3A7a029986 for <[email protected]>; Sun, 25 Apr 2010 12:03:11 -0500 Date: Sun, 25 Apr 2010 12:03:10 -0500 From: Hour Jar <[email protected]> To: [email protected] Message-ID: <[email protected]> Subject: [email protected] has invited you to New Event MIME-Version: 1.0 Content-Type: text/html; charset=us-ascii Content-Transfer-Encoding: 7bit


  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way: I have an Articles table: CREATE TABLE [dbo].[Articles]( [server_ref] [int] NOT NULL, [article_ref] [int] NOT NULL, [article_title] [varchar](400) NOT NULL, [category_ref] [int] NOT NULL, [size] [bigint] NOT NULL ) Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: Indexes are disabled on the Articles table. For each dumped text file Data is BULK copied to a temporary table. Temporary table is updated. Old data for the server is dropped from the Articles table. Temporary table data is copied to Articles table. Temporary table dropped. Once this process is complete for all servers the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e. SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%' has been satisfactory but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions? SELECT * FROM Articles WHERE article_title LIKE '%criteria%' is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$ which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.


  • Django & google openid authentication with socialauth

    - by Zayatzz
    Hello I am trying to use django-socialauth (http://github.com/uswaretech/Django-Socialauth) for authenticating users for my django project. This is firs time working with openid and i've had to figure out how exactly this open id works. I have more or less understood it, by now, but there are few things that elude me. The authentication process starts when the request is put together in in django-socialauth.openid_consumer.views.begin. I can see that the outgoing authentication request is more or less something like this: https://www.google.com/accounts/o8/ud?openid.assoc_handle=AOQobUckRThPUj3K1byG280Aze-dnfc9Iu6AEYaBwvHE11G0zy8kY8GZ& openid.ax.if_available=fname& openid.ax.mode=fetch_request& openid.ax.required=email& openid.ax.type.email=http://axschema.org/contact/email& openid.ax.type.fname=http://example.com/schema/fullname& openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select& openid.identity=http://specs.openid.net/auth/2.0/identifier_select& openid.mode=checkid_setup&openid.ns=http://specs.openid.net/auth/2.0& openid.ns.ax=http://openid.net/srv/ax/1.0& openid.ns.sreg=http://openid.net/extensions/sreg/1.1& openid.realm=http://localhost/& openid.return_to=http://localhost/social/gmail_login/complete/?janrain_nonce=2010-03-20T11%3A19%3A44ZPZCjNc&openid.sreg.optional=postcode,country,nickname,email This is lot like 2nd example here: http://code.google.com/apis/accounts/docs/OpenID.html#Samples The problem is, that the request, i get back, is nothing like the corresponding example from code.google.com (look at the 3rd example in example responses. Response dict i get is like this: { 'openid.op_endpoint': 'https://www.google.com/accounts/o8/ud', 'openid.sig': 'QWMa4x4ruMUvSCfLwKV6CZRuo0E=', 'openid.ext1.type.email': 'http://axschema.org/contact/email', 'openid.return_to': 'http://localhost/social/gmail_login/complete/?janrain_nonce=2010-03-20T17%3A54%3A06ZHV4cqh', 'janrain_nonce': '2010-03-20T17:54:06ZHV4cqh', 'openid.response_nonce': '2010-03-20T17:54:06ZdC5mMu9M_6O4pw', 'openid.claimed_id': 'https://www.google.com/accounts/o8/id?id=AItOghawkFz0aNzk91vaQWhD-DxRJo6sS09RwM3SE', 'openid.mode': 'id_res', 'openid.ns.ext1': 'http://openid.net/srv/ax/1.0', 'openid.signed': 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email', 'openid.ext1.value.email': '[email protected]', 'openid.assoc_handle': 'AOQobUfssTJ2IxRlxrIvU4Xg8HHQKKTEuqwGxvwwuPR5rNvag0elGlYL', 'openid.ns': 'http://specs.openid.net/auth/2.0', 'openid.identity': 'https://www.google.com/accounts/o8/id?id=AItOawkghgfhf1FkvaQWhD-DxRJo6sS09RwMKjASE', 'openid.ext1.mode': 'fetch_response'} The socialauth itself has been built to accept my email address this way: elif request.openid and request.openid.ax: email = request.openid.ax.get('email') And obviously this fails. Why i am asking all this is, that perhaps i am doing something wrong and my outgoing request is wrong? Or am i doing all correctly and should change the socialaouth module to accept info in a new way and then commit the change? Alan
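
    A side note on the response shown above: Google returns the Attribute Exchange (AX) values under whatever namespace alias it chose (here ext1), so a consumer has to resolve the alias from the openid.ns.* keys instead of expecting a fixed openid.ax.* prefix. A minimal, library-free sketch of that lookup (just an illustration over a response dict like the one above, not how django-socialauth is structured):

        AX_NS = "http://openid.net/srv/ax/1.0"
        EMAIL_TYPE = "http://axschema.org/contact/email"

        def email_from_openid_response(params):
            """Resolve the AX namespace alias and return the email value, if any."""
            alias = None
            for key, value in params.items():
                if key.startswith("openid.ns.") and value == AX_NS:
                    alias = key[len("openid.ns."):]        # e.g. "ext1"
                    break
            if alias is None:
                return None
            prefix = "openid." + alias + "."
            for key, value in params.items():
                if key.startswith(prefix + "type.") and value == EMAIL_TYPE:
                    attr = key.rsplit(".", 1)[-1]          # e.g. "email"
                    return params.get(prefix + "value." + attr)
            return None

        # email_from_openid_response(response_dict)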


  • Why does this CSS example use "height: 1%" with "overflow: auto"?

    - by Lawrence Lau
    I am reading a HTML and CSS book. It has a sample code of two-column layout. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <style> #main {height: 1%; overflow: auto;} #main, #header, #footer {width: 768px; margin: auto;} #bodycopy { float: right; width: 598px; } #sidebar {margin-right: 608px; } #footer {clear: both; } </style> </head> <body> <div id="header" style='background-color: #AAAAAA'>This is the header.</div> <div id="main" style='background-color: #EEEEEE'> <div id="bodycopy" style='background-color: #BBBBBB'> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> This is the principal content.<br /> </div> <div id="sidebar" style='background-color: #CCCCCC'> This is the sidebar. </div> </div> <div id="footer" style='background-color: #DDDDDD'>This is the footer.</div> </body> </html> The author mentions that the use of overflow auto and 1% height will make the main area expand to encompass the computed height of content. I try to remove the 1% height and tried in different browsers but they don't show a difference. I am quite confused of its use. Any idea?


  • Debug checklists/Helpers with Visual Studio/.NET

    - by hmm
    I have an STAThread process which, when called from my process, simply bombs with no exceptions and almost nothing in the call stack. If I press Ctrl+Alt+E, I get the Exceptions window; how can I use it in this situation? It seems like some exception is not being caught here. Also, what other debugging aids can I use in Visual Studio 2010?


  • get type of NSNumber

    - by okami
    I want to get the type of NSNumber instance. I found out on http://www.cocoadev.com/index.pl?NSNumber this: NSNumber *myNum = [[NSNumber alloc] initWithBool:TRUE]; if ([[myNum className] isEqualToString:@"NSCFNumber"]) { // process NSNumber as integer } else if ([[myNum className] isEqualToString:@"NSCFBoolean"]) { // process NSNumber as boolean } Ok, but this doesn't work, the [myNum className] isn't recognized by the compiler. I'm compiling for iPhone.


  • Sticky Footer Dilemma - so close

    - by fmz
    I have an effective sticky footer solution, but I have a slight conflict because of the need for a background image in addition to the repeating banner image across the top. In order to make the large watermark type image show up on this site I added the following div: <div id="body-outer"> The only problem is that once I add this div tag, the footer climbs up the page, overriding all content in its path. If I remove that div, the footer is as sticky as one could hope. Is there some way to have the background image and the sticky footer too? Here is the basic html structure: <body class="home"> <div id="body-outer"> <div id="container"> <div id="header"> ... <div id="footer"> <p>Placeholder for footer content</p> </div> Here is the css for the sticky footer and the background image: #body-outer { background: url(../_images/bg_body.jpg) no-repeat center top; height: 100%; } body { width: 100%; background: url(../_images/bg_html.jpg) repeat-x left top; } #container { width: 960px; margin: 0 auto; } html, body, #container { height: 100%; } body > #container { height: auto; min-height: 100%; padding-bottom: 140px; } #main { padding-bottom: 0; } #footer { position: relative; width: 100%; background-color: #4e8997; margin-top: ; /* negative value of footer height */ height: 140px; clear:both; } .clearfix:after { content: "."; display: block; height: 0; clear: both; visibility: hidden; } .clearfix {display: inline-block;} * html .clearfix { height: 1%;} Any assistance in sorting this out would be greatly appreciated. thanks.


  • How to do role-based access control for a franchise business?

    - by FreshCode
    I'm building the 2nd iteration of a web-based CRM+CMS for a franchise service business in ASP.NET MVC 2. I need to control access to each franchise's services based on the roles a user is assigned for that franchise. 4 examples: Receptionist should be able to book service jobs in for her "Atlantic Seaboard" franchise, but not do any reporting. Technician should be able to alter service jobs, but not modify invoices. Managers should be able to apply discount to invoices for jobs within their stores. Owner should be able to pull reports for any franchises he owns. Where should franchise-level access control fit in between the Data - Services - Web layer? If it belongs in my Controllers, how should I best implement it? Partial Schema Roles class int ID { get; set; } // primary key for Role string Name { get; set; } Partial Franchises class short ID { get; set; } // primary key for Franchise string Slug { get; set; } // unique key for URL access, eg /{franchise}/{job} string Name { get; set; } UserRoles mapping short FranchiseID; // related to franchises table Guid UserID; // related to Users table int RoleID; // related to Roles table DateTime ValidFrom; DateTime ValidUntil; Background I built the previous CRM in classic ASP and it runs the business well, but it's time for an upgrade to speed up workflow and leave less room for error. For the sake of proper testing and better separation between data and presentation, I decided to implement the repository pattern as seen in Rob Conery's MVC Storefront series. Controller Implementation Access Control with [Authorize] attribute If there was just one franchise involved, I could simply limit access to a controller action like so: [Authorize(Roles="Receptionist, Technician, Manager, Owner")] public ActionResult CreateJob(Job job) { ... } And since franchises don't just pop up over night, perhaps this is a strong case to use the new Areas feature in ASP.NET MVC 2? Or would this lead to duplicate Views? Controllers, URL Routing & Areas Assuming Areas aren't used, what would be the best way to determine which franchise's data is being accessed? I thought of this: {franchise}/{controller}/{action}/{id} or is it better to determine a job's franchise in a Details(...) action and limit a user's action with [Authorize]: {job}/{id}/{action}/{subaction} {invoice}/{id}/{action}/{subaction} which makes more sense if any user could potentially have access to more than one franchise without cluttering the URL with a {franchise} parameter. Any input is appreciated.
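
    Whichever layer ends up owning it, the check itself is small. Here is a language-neutral sketch (in Python, purely for illustration) of the franchise-scoped role test behind an [Authorize]-style filter, using field names that mirror the UserRoles mapping above; RoleName and the dict shape are hypothetical:

        from datetime import datetime

        def has_franchise_role(user_roles, user_id, franchise_id, allowed_roles, now=None):
            """True if the user holds one of allowed_roles for this franchise right now."""
            now = now or datetime.utcnow()
            return any(
                r["UserID"] == user_id
                and r["FranchiseID"] == franchise_id
                and r["RoleName"] in allowed_roles
                and r["ValidFrom"] <= now <= r["ValidUntil"]
                for r in user_roles
            )

        # e.g. guarding "book a service job":
        # if not has_franchise_role(rows, uid, fid, {"Receptionist", "Manager", "Owner"}): deny()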


  • Restlet with GWT and Tomcat Error

    - by Holograham
    I am getting this error on startup I am using GWT 2.0.3 and Reslet RC3 type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: Servlet.init() for servlet adapter threw exception org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) root cause java.lang.NoSuchMethodError: org.restlet.Client.handle(Lorg/restlet/Request;)Lorg/restlet/Response; org.restlet.ext.servlet.ServerServlet.createComponent(ServerServlet.java:423) org.restlet.ext.servlet.ServerServlet.getComponent(ServerServlet.java:763) org.restlet.ext.servlet.ServerServlet.init(ServerServlet.java:881) javax.servlet.GenericServlet.init(GenericServlet.java:212) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) My web XML is as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <context-param> <param-name>org.restlet.clients</param-name> <param-value>CLAP FILE WAR</param-value> </context-param> <servlet> <servlet-name>adapter</servlet-name> <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class> <init-param> <param-name>org.restlet.application</param-name> <param-value>com.tdc.Propspace.server.TestServerApplication</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>adapter</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> Any ideas what would cause this?


  • How do I debug an MPI program?

    - by Jay Conrod
    I have an MPI program which compiles and runs, but I would like to step through it to make sure nothing bizarre is happening. Ideally, I would like a simple way to attach GDB to any particular process, but I'm not really sure whether that's possible or how to do it. An alternative would be having each process write debug output to a separate log file, but this doesn't really give the same freedom as a debugger. Are there better approaches? How do you debug MPI programs?
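
    One low-tech pattern (sketched here with mpi4py for brevity; the same idea works in a C or Fortran MPI program, and the environment variable and sentinel path are made up): have every rank print its host and PID so you can gdb -p the one you care about, and optionally pause the ranks until you are attached:

        import os
        import socket
        import time
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        print("rank %d: pid %d on %s" % (rank, os.getpid(), socket.gethostname()), flush=True)

        if os.environ.get("MPI_DEBUG_WAIT"):
            # attach gdb to the PIDs printed above, then `touch /tmp/mpi-go` to release
            while not os.path.exists("/tmp/mpi-go"):
                time.sleep(1)
        comm.Barrier()    # all ranks proceed together once released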


  • Visual Studio 2010 color theme

    - by James Jones
    Has anyone found/made any decent color themes for Visual Studio 2010? My VS 2005/2008 themes seem to be getting mangled in the upgrade process. I'm accustomed to using the ever famous Oren Ellenbogen's Dark Scheme featured on Scott Hanselman's blog, but the upgrade process has made it downright butt-ugly. Does anyone have any gems they'd like to share?


  • Backbone.Marionette - Collection within CompositeView, which itself is nested in a CollectionView

    - by nicefinly
    *UPDATE: The problem probably involves the tour-template as I've discovered that it thinks the 'name' attribute is undefined. This leads me to think that it's not an array being passed on to the ToursView, but for some reason a string. * After studying similar questions on StackOverflow: How to handle nested CompositeView using Backbone.Marionette? How do you properly display a Backbone marionette collection view based on a model javascript array property? Nested collections with Backbone.Marionette ... and Derick Bailey's excellent blog's on this subject: http://lostechies.com/derickbailey/2012/04/05/composite-views-tree-structures-tables-and-more/ ... including the JSFiddle's: http://jsfiddle.net/derickbailey/AdWjU/ I'm still having trouble with a displaying the last node of a nested CollectionView of CompositeViews. It is the final CollectionView within each CompositeView that is causing the problem. CollectionView { CompositeView{ CollectionView {} //**<-- This is the troublemaker!** } } NOTE: I have already made a point of creating a valid Backbone.Collection given that the collection passed on to the final, child CollectionView is just a simple array. Data returned from the api to ToursList: [ { "id": "1", "name": "Venice", "theTours": "[ {'name': u'test venice'}, {'name': u'test venice 2'} ]" }, { "id": "2", "name": "Rome", "theTours": "[ {'name': u'Test rome'} ]" }, { "id": "3", "name": "Dublin", "theTours": "[ {'name': u'test dublin'}, {'name': u'test dublin 2'} ]" } ] I'm trying to nest these in a dropdown where the nav header is the 'name' (i.e. Dublin), and the subsequent li 's are the individual tour names (i.e. 'test dublin', 'test dublin2', etc.) Tour Models and Collections ToursByLoc = TastypieModel.extend({}); ToursList = TastypieCollection.extend({ model: ToursByLoc, url:'/api/v1/location/', }); Tour Views ToursView = Backbone.Marionette.ItemView.extend({ template: '#tour-template', tagName: 'li', }); ToursByLocView = Backbone.Marionette.CompositeView.extend({ template: '#toursByLoc-template', itemView: ToursView, initialize: function(){ //As per Derick Bailey's comments regarding the need to pass on a //valid Backbone.Collection to the child CollectionView //REFERENCE: http://stackoverflow.com/questions/12163118/nested-collections-with-backbone-marionette var theTours = this.model.get('theTours'); this.collection = new Backbone.Collection(theTours); }, appendHtml: function(collectionView, itemView){ collectionView.$('div').append(itemView.el); } }); ToursListView = Backbone.Marionette.CollectionView.extend({ itemView: ToursByLocView, }); Templates <script id="tour-template" type="text/template"> <%= name %> </script> <script id="toursByLoc-template" type="text/template"> <li class="nav-header"><%= name %></li> <div class="indTours"></div> <li class="divider"></li> </script>


  • FormCollection does not show all my inputs

    - by sanfra1983
    Hi, I have a View where some text inputs are added dynamically using jQuery. Everything works, but when I add these inputs and do View Source in the browser, I do not see the added inputs. The function looks roughly like this: function addPerson() { current++; var StrToAdd = '<table id="compo' + current + '" name="compo' + current + '"><tr>' + '<td><label for="firstname' + current + '">Nome</label><input id="firstname' + current + '" name="Componenti.Nome_' + current + '" size="29" /></td>' + '<td><label for="lastname' + current + '">Cognome</label><input id="lastname' + current + '" name="Componenti.Cognome_' + current + '" size="29" /></td>'; StrToAdd += '<td><label for="luogonascita' + current + '">LuogoNascita</label><input id="luogodinascita' + current + '" name="Componenti.Luogonascita_' + current + '" size="29" /></td>'; StrToAdd += '<td><label for="datanascita' + current + '">DataNascita</label><input id="dateOfBirth' + current + '" name="Componenti.datanascita_' + current + '" size="29" /></td></tr></table>'; StrToAdd += '<script type="text/javascript">jQuery(function($) { $("#dateOfBirth' + current + '").mask("' + mask + '"); });<\/script>'; $('#Components').append(StrToAdd); } The problem is that when I pass the data via POST to the action and build var valueProvider = formanagrafica.ToValueProvider(); I find in valueProvider all the inputs I added collapsed into a single entry, so when there is more than one it gives me the values separated by commas. How can I retrieve the values of each input row separately? I hope I explained this correctly.


  • Adobe AIR Debugger not visible

    - by Chin
    I have a little conundrum: when I debug my AIR app, it goes through the build process and seemingly launches. I can see the process in Windows Task Manager, but cannot see the actual application. Kind of strange, I know. If anyone has had the same problem and can offer a shortcut to success, I'd much appreciate a pointer. Thanks


