Search Results

Search found 3948 results on 158 pages for '19 lee'.


  • Perfect Your MySQL Database Administrators Skills

    - by Antoinette O'Sullivan
    With its proven ease-of-use, performance, and scalability, MySQL has become the leading database choice for web-based applications, used by high-profile web properties including Google, Yahoo!, Facebook, YouTube, Wikipedia and thousands of mid-sized companies. Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path, given the growing job demand.

    Hone your skills as a MySQL Database Administrator by taking the MySQL for Database Administrators course, which teaches you how to secure privileges, set resource limitations and access controls, and covers backup and recovery basics. You also learn how to create and use stored procedures, triggers and views. You can take this 5-day course through three delivery methods:

    Training-on-Demand: Take this course at your own pace and at a time that suits you through high-quality streaming video delivery. You also get to schedule time in a classroom environment to perform the hands-on exercises.
    Live-Virtual: Attend a live instructor-led event from your own desk. Hundreds of events are already on the calendar in many time zones.
    In-Class: Travel to an education center to attend this class. A sample of events is shown below:

        Location                    | Date             | Delivery Language
        Budapest, Hungary           | 26 November 2012 | Hungarian
        Prague, Czech Republic      | 19 November 2012 | Czech
        Warsaw, Poland              | 10 December 2012 | Polish
        Belfast, Northern Ireland   | 26 November 2012 | English
        London, England             | 26 November 2012 | English
        Rome, Italy                 | 19 November 2012 | Italian
        Lisbon, Portugal            | 12 November 2012 | European Portuguese
        Porto, Portugal             | 21 January 2013  | European Portuguese
        Amsterdam, Netherlands      | 19 November 2012 | Dutch
        Nieuwegein, Netherlands     | 8 April 2013     | Dutch
        Barcelona, Spain            | 4 February 2013  | Spanish
        Madrid, Spain               | 19 November 2012 | Spanish
        Mechelen, Belgium           | 25 February 2013 | English
        Windhof, Luxembourg         | 19 November 2012 | English
        Johannesburg, South Africa  | 9 December 2012  | English
        Cairo, Egypt                | 20 October 2012  | English
        Nairobi, Kenya              | 26 November 2012 | English
        Petaling Jaya, Malaysia     | 29 October 2012  | English
        Auckland, New Zealand       | 5 November 2012  | English
        Wellington, New Zealand     | 23 October 2012  | English
        Brisbane, Australia         | 19 November 2012 | English
        Edmonton, Canada            | 7 January 2013   | English
        Vancouver, Canada           | 7 January 2013   | English
        Ottawa, Canada              | 22 October 2012  | English
        Toronto, Canada             | 22 October 2012  | English
        Montreal, Canada            | 22 October 2012  | English
        Mexico City, Mexico         | 10 December 2012 | Spanish
        Sao Paulo, Brazil           | 10 December 2012 | Brazilian Portuguese

    For more information on this course or any aspect of the MySQL curriculum, visit http://oracle.com/education/mysql.

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots. ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS. ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots. Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop.

    Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008. Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service. And the nice thing is that no additional configuration is required - it "just works".

    On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later. For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client.

    Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client:

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right-click on a folder or file and select Previous Versions. Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files.

    The screenshot above shows various snapshots in the Previous Versions window, created at different times. On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share. The .zfs setting can be toggled as desired; it makes no difference when using previous versions. To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected. In this case the user is prompted to view, copy or restore the file from one of the available snapshots.

    As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted from most recent to oldest. There's nothing we can do about this; it's the way that the interface works. Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory. Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
        # ls -d% all /home/administrator
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

        # ls -d% all /home/administrator/.zfs/snapshot/snap101
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

        # zfs get all tank/home/administrator@snap101
        NAME                             PROPERTY  VALUE
        tank/home/administrator@snap101  type      snapshot
        tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd.

    In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots. The Windows desktop provides an easy to use, intuitive GUI and no configuration is required to use or access previous versions of files or folders.

    REFERENCES FOR MORE INFORMATION
    - ZFS
    - ZFS Learning Center
    - Introduction to Shadow Copies of Shared Folders
    - Shadow Copies for Shared Folders Client
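    As a quick illustration of the recovery path described above (a sketch using the dataset and snapshot names from this post, not commands taken from the original article), a previous version of a file can also be copied straight out of the snapshot directory on the Solaris side:

    ```sh
    # the snapshot taken earlier is exposed read-only under .zfs/snapshot
    ls /home/administrator/.zfs/snapshot/snap101

    # copy a previous version of a file back into the live dataset
    # (report.doc is a hypothetical file name used only for illustration)
    cp /home/administrator/.zfs/snapshot/snap101/report.doc /home/administrator/
    ```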

    Read the article

  • C# - Parse HTML source as XML

    - by fonix232
    I would like to read an HTML page from a dynamic URL and process it like an XML file, based on nodes (HTML tags). Is this somehow possible? I mean, there is this HTML code:

        <table class="bidders" cellpadding="0" cellspacing="0">
          <tr class="bidRow4"><td>kucik (automata)</td><td class="right">9 374 Ft</td><td class="bidders_date">2010-06-10 18:19:52</td></tr>
          <tr class="bidRow4"><td>macszaf (automata)</td><td class="right">9 373 Ft</td><td class="bidders_date">2010-06-10 18:19:52</td></tr>
          <tr class="bidRow2"><td>kucik (automata)</td><td class="right">9 372 Ft</td><td class="bidders_date">2010-06-10 18:19:42</td></tr>
          <tr class="bidRow2"><td>macszaf (automata)</td><td class="right">9 371 Ft</td><td class="bidders_date">2010-06-10 18:19:42</td></tr>
          <tr class="bidRow0"><td>kucik (automata)</td><td class="right">9 370 Ft</td><td class="bidders_date">2010-06-10 18:19:32</td></tr>
          <tr class="bidRow0"><td>macszaf (automata)</td><td class="right">9 369 Ft</td><td class="bidders_date">2010-06-10 18:19:32</td></tr>
          <tr class="bidRow8"><td>kucik (automata)</td><td class="right">9 368 Ft</td><td class="bidders_date">2010-06-10 18:19:22</td></tr>
          <tr class="bidRow8"><td>macszaf (automata)</td><td class="right">9 367 Ft</td><td class="bidders_date">2010-06-10 18:19:22</td></tr>
          <tr class="bidRow6"><td>kucik (automata)</td><td class="right">9 366 Ft</td><td class="bidders_date">2010-06-10 18:19:12</td></tr>
          <tr class="bidRow6"><td>macszaf (automata)</td><td class="right">9 365 Ft</td><td class="bidders_date">2010-06-10 18:19:12</td></tr>
        </table>

    I want to parse this into a ListView (or a Grid) and create rows from the data it contains. Each tr is a separate row, and each td within a given tr is a column of that row. I also want it to be as fast as possible, as it would refresh itself every 5 seconds. Is there any library for this?
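    A hedged sketch of one way to approach this (not from the original question): the HtmlAgilityPack library parses real-world, non-well-formed HTML into an XPath-queryable node tree. The URL below is hypothetical and the class names simply mirror the sample markup.

    ```csharp
    using System;
    using HtmlAgilityPack;   // third-party library, available via NuGet

    class BidScraper
    {
        static void Main()
        {
            // hypothetical URL standing in for the dynamic page from the question
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("http://example.com/auction.html");

            // every <tr> inside the table with class "bidders"
            var rows = doc.DocumentNode.SelectNodes("//table[@class='bidders']//tr");
            if (rows == null) return;

            foreach (HtmlNode row in rows)
            {
                var cells = row.SelectNodes("td");
                if (cells == null || cells.Count < 3) continue;

                string bidder = cells[0].InnerText.Trim();
                string price  = cells[1].InnerText.Trim();
                string date   = cells[2].InnerText.Trim();

                // in a WinForms/WPF app these values would become a ListView/Grid row instead
                Console.WriteLine("{0} | {1} | {2}", bidder, price, date);
            }
        }
    }
    ```

    For a 5-second refresh it would make sense to run the download on a timer or background thread and only rebind the ListView when the parsed rows actually change.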

    Read the article

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table: @Entity public class SiteConfiguration extends ConfigurationSet { @ManyToMany @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId")) @JoinTable( name="SiteConfig_InstConfig", joinColumns = @JoinColumn(name="SiteConfigId"), inverseJoinColumns = @JoinColumn(name="InstallationConfigId") ) Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations = new HashMap<SiteTypeInstallation, InstallationConfiguration>(); ... } The underlying table (in Oracle 11g) is: Name Null Type ------------------------------ -------- ---------- SITECONFIGID NOT NULL NUMBER(19) SITETYPEINSTALLATIONID NOT NULL NUMBER(19) INSTALLATIONCONFIGID NOT NULL NUMBER(19) The key entity used to have a three-column primary key in the database, but is now redefined as: @Entity public class SiteTypeInstallation implements IdResolvable { @Id @GeneratedValue(generator="SiteTypeInstallationSeq", strategy= GenerationType.SEQUENCE) @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1) long id; @ManyToOne @JoinColumn(name="SiteTypeId") SiteType siteType; @ManyToOne @JoinColumn(name="InstalationRoleId") InstallationRole role; @ManyToOne @JoinColumn(name="InstallationTypeId") InstType type; ... } The table for this has a primary key 'Id' and foreign key constraints+indexes for each of the other columns: Name Null Type ------------------------------ -------- ---------- SITETYPEID NOT NULL NUMBER(19) INSTALLATIONROLEID NOT NULL NUMBER(19) INSTALLATIONTYPEID NOT NULL NUMBER(19) ID NOT NULL NUMBER(19) For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error: org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId]) If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id is clearly defining a simple key, and doubly confused why it picks exactly just those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars
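    Not an answer from the original post, just a hedged sketch of one variation worth trying while debugging: make the map-key join column reference the new single-column primary key explicitly, so nothing is left for Hibernate to infer about the old composite key.

    ```java
    // Sketch only; identical to the original mapping except for the explicit referencedColumnName.
    @ManyToMany
    @MapKeyManyToMany(joinColumns = @JoinColumn(name = "SiteTypeInstallationId",
                                                referencedColumnName = "Id"))
    @JoinTable(
        name = "SiteConfig_InstConfig",
        joinColumns = @JoinColumn(name = "SiteConfigId"),
        inverseJoinColumns = @JoinColumn(name = "InstallationConfigId"))
    Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
            new HashMap<SiteTypeInstallation, InstallationConfiguration>();
    ```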

    Read the article

  • The Zen of Python distils the guiding principles for Python into 20 aphorisms but lists only 19. What's the twentieth?

    - by Jeff Walden
    From PEP 20, The Zen of Python: Long time Pythoneer Tim Peters succinctly channels the BDFL's guiding principles for Python's design into 20 aphorisms, only 19 of which have been written down. What is this twentieth aphorism? Does it exist, or is the reference merely a rhetorical device to make the reader think? (One potential answer that occurs to me is that "You aren't going to need it" is the remaining aphorism. If that were the case, it would both exist and act to make the reader think, and it would be characteristically playful, thus fitting the list all the better. But web searches suggest this to be an extreme programming mantra, not intrinsically Pythonic wisdom, so I'm stumped.)
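    For reference (standard CPython behaviour, not something stated in the question): the nineteen aphorisms that have been written down ship with the interpreter as an Easter egg, so you can print and count them yourself.

    ```python
    # Importing the module prints "The Zen of Python, by Tim Peters"
    # followed by the 19 written-down aphorisms.
    import this
    ```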

    Read the article

  • Rails - session information being cleared?

    - by Jty.tan
    Hi! I'm having a weird issue that I can't track down... For context, I have resources of Users, Registries, and Giftlines. Each User has many Registries. Each Registry has many Giftlines. It's a belongs to association for them in a reverse manner. What is basically happening, is that when I am creating a giftline, the giftline itself is created properly, and linked to its associated Registry properly, but then in the process of being redirected back to the Registry show page, the session[:user_id] variable is cleared and I'm logged out. As far as I can tell, where it goes wrong is here in the registries_controller: def show @registry = Registry.find(params[:id]) @user = User.find(@registry.user_id) if (params[:user_id] && (@user.login != params[:user_id]) ) flash[:notice] = "User #{params[:user_id]} does not have such a registry." redirect_to user_registries_path(session[:user_id]) end end Now, to be clear, I can do a show of the registry normally, and nothing weird happens. It's only when I've added a giftline does the session[:user_id] variable get cleared. I used the debugger and this is what seems to be happening. (rdb:19) list [20, 29] in /Users/kriston/Dropbox/ruby_apps/bee_registered/app/controllers/registries_controller.rb 20 render :action => 'new' 21 end 22 end 23 24 def show => 25 @registry = Registry.find(params[:id]) 26 @user = User.find(@registry.user_id) 27 if (params[:user_id] && (@user.login != params[:user_id]) ) 28 flash[:notice] = "User #{params[:user_id]} does not have such a registry." 29 redirect_to user_registries_path(session[:user_id]) (rdb:19) session[:user_id] "tester" (rdb:19) So from there we can see that the code has gotten back to the show command after the item had been added, and that the session[:user_id] variable is still set. (rdb:19) list [22, 31] in /Users/kriston/Dropbox/ruby_apps/bee_registered/app/controllers/registries_controller.rb 22 end 23 24 def show 25 @registry = Registry.find(params[:id]) 26 @user = User.find(@registry.user_id) => 27 if (params[:user_id] && (@user.login != params[:user_id]) ) 28 flash[:notice] = "User #{params[:user_id]} does not have such a registry." 29 redirect_to user_registries_path(session[:user_id]) 30 end 31 end (rdb:19) session[:user_id] "tester" (rdb:19) Stepping on, we get to this point. And the session[:user_id] is still set. At this point, the URL is of the format localhost:3000/registries/:id, so params[:user_id] fails, and the if condition doesn't occur. (Unless I am completely wrong .<) So then the next bit occurs, which is (rdb:19) list [1327, 1336] in /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb 1327 end 1328 1329 def perform_action 1330 if action_methods.include?(action_name) 1331 send(action_name) => 1332 default_render unless performed? 1333 elsif respond_to? :method_missing 1334 method_missing action_name 1335 default_render unless performed? 1336 else (rdb:19) session[:user_id] "tester" And then when I hit next... (rdb:19) next 2: session[:user_id] = /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:618 return index if nesting != 0 || aborted (rdb:19) list [613, 622] in /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb 613 private 614 def call_filters(chain, index, nesting) 615 index = run_before_filters(chain, index, nesting) 616 aborted = @before_filter_chain_aborted 617 perform_action_without_filters unless performed? 
|| aborted => 618 return index if nesting != 0 || aborted 619 run_after_filters(chain, index) 620 end 621 622 def run_before_filters(chain, index, nesting) (rdb:19) session {:user_id=>nil, :session_id=>"49992cdf2ddc708b441807f998af7ddc", :return_to=>"/registries", "flash"=>{}, :_csrf_token=>"xMDI0oDaOgbzhQhDG7EqOlGlxwIhHlB6c71fWgOIKcs="} The session[:user_id] is cleared, and when the page renders, I'm logged out. .< Sooo.... Any idea why this is occurring? It just occurred to me that I'm not sure if I'm meant to be pasting large chunks of debug output in here... Somebody point out to me if I'm not meant to be doing this. . And yes, this only occurs when I have added a giftitem, and it is sending me back to the registry page. When I'm viewing it, the same code occurs, but the session[:user_id] variable isn't cleared. It's driving me mildly insane. Thanks!

    Read the article

  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page. The error log reports the following.

        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning: require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems, and the solutions have involved using chmod on LocalSettings.php to set it to 644 or, in other cases, 755. Others have said to use chown to make LocalSettings match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions, or maybe I missed something?
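    A minimal sketch of the permission fixes discussed above, assuming the web server really does run as the 'apache' user and using the paths from the error log (adjust both to your system):

    ```sh
    # give the file to the web server user and make it world-readable
    chown apache:apache /var/www/mediawiki-1.19.2/LocalSettings.php
    chmod 644 /var/www/mediawiki-1.19.2/LocalSettings.php

    # a parent directory that apache cannot traverse would cause the same "Permission denied",
    # so check the whole path as well
    ls -ld /var/www /var/www/mediawiki-1.19.2 /var/www/mediawiki-1.19.2/LocalSettings.php
    ```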

    Read the article

  • Jira access with AJP-Proxy

    - by user60869
    I want to configure Jira access over an AJP proxy. I proceeded as follows, following this howto: http://confluence.atlassian.com/display/JIRA/Configuring+Apache+Reverse+Proxy+Using+the+AJP+Protocol

    1) In server.xml I activate the AJP connector (the snippet did not survive in this post; see the sketch after the log excerpt below).

    2) Edit the vhost configuration:

        # Load Proxy modules
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        # Load AJP module
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        # Proxy configuration
        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On

            # Basic AuthType configuration
            <Proxy *>
                AuthType Basic
                AuthName Bamboo-Server
                AuthUserFile /var/www/userdb
                Require valid-user
                AddDefaultCharset off
                Order deny,allow
                Deny from all
                Allow from 192.168.0.1
                satisfy any
            </Proxy>

            ProxyPass /bamboo http://localhost:8085/bamboo
            ProxyPassReverse /bamboo http://localhost:8085/bamboo
            ProxyPass /jira ajp://localhost:8009/
            ProxyPassReverse /jira ajp://localhost:8009/
        </IfModule>

    EDIT: In the logs I found the following:

        //localhost:8080/
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1819): proxy: worker ajp://localhost:8080/ already initialized
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1913): proxy: initialized single connection worker 1 in child 5578 for (localhost)
        [Fri Nov 19 14:51:32 2010] [error] ajp_read_header: ajp_ilink_receive failed
        [Fri Nov 19 14:51:32 2010] [error] (120006)APR does not understand this error code: proxy: read response failed from (null) (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] proxy_util.c(2008): proxy: AJP: has released connection for (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] mod_deflate.c(615): [client xx.xx.xx.xx Zlib: Compressed 468 to 320 : URL /jira

    But it doesn't work. Does somebody have an idea?
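    For reference, step 1) normally means adding or uncommenting the AJP connector in Tomcat's conf/server.xml. The original snippet did not survive in the post, so the line below is the typical default rather than the poster's actual configuration; its port has to match the ajp:// URLs in the vhost.

    ```xml
    <!-- conf/server.xml: AJP connector listening on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    ```

    It is also worth noting that the vhost proxies to ajp://localhost:8009/ while the debug log above shows a worker for ajp://localhost:8080/, so double-checking that the proxied port and the connector port really match would be a sensible first step.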

    Read the article

  • What is the effect of this order_by clause?

    - by bread
    I don't understand what this order_by clause is doing and whether I need it or not:

        select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price
        from items_ordered i, customers c
        where i.customerid = c.customerid
        group by c.customerid, i.item, i.order_date
        order by i.order_date desc;

    This produces this data:

        10330  Shawn Dalton     30-Jun-1999  Pogo stick  28.00
        10101  John Gray        30-Jun-1999  Raft  58.00
        10410  Mary Ann Howell  30-Jan-2000  Unicycle  192.50
        10101  John Gray        30-Dec-1999  Hoola Hoop  14.75
        10449  Isabela Moore    29-Feb-2000  Flashlight  4.50
        10410  Mary Ann Howell  28-Oct-1999  Sleeping Bag  89.22
        10339  Anthony Sanchez  27-Jul-1999  Umbrella  4.50
        10449  Isabela Moore    22-Dec-1999  Canoe  280.00
        10298  Leroy Brown      19-Sep-1999  Lantern  29.00
        10449  Isabela Moore    19-Mar-2000  Canoe paddle  40.00
        10413  Donald Davids    19-Jan-2000  Lawnchair  32.00
        10330  Shawn Dalton     19-Apr-2000  Shovel  16.75
        10439  Conrad Giles     18-Sep-1999  Tent  88.00
        10298  Leroy Brown      18-Mar-2000  Pocket Knife  22.38
        10299  Elroy Keller     18-Jan-2000  Inflatable Mattress  38.00
        10438  Kevin Smith      18-Jan-2000  Tent  79.99
        10101  John Gray        18-Aug-1999  Rain Coat  18.30
        10449  Isabela Moore    15-Dec-1999  Bicycle  380.50
        10439  Conrad Giles     14-Aug-1999  Ski Poles  25.50
        10449  Isabela Moore    13-Aug-1999  Unicycle  180.79
        10101  John Gray        08-Mar-2000  Sleeping Bag  88.70
        10299  Elroy Keller     06-Jul-1999  Parachute  1250.00
        10438  Kevin Smith      02-Nov-1999  Pillow  8.50
        10101  John Gray        02-Jan-2000  Lantern  16.00
        10315  Lisa Jones       02-Feb-2000  Compass  8.00
        10449  Isabela Moore    01-Sep-1999  Snow Shoes  45.00
        10438  Kevin Smith      01-Nov-1999  Umbrella  6.75
        10298  Leroy Brown      01-Jul-1999  Skateboard  33.00
        10101  John Gray        01-Jul-1999  Life Vest  125.00
        10330  Shawn Dalton     01-Jan-2000  Flashlight  28.00
        10298  Leroy Brown      01-Dec-1999  Helmet  22.00
        10298  Leroy Brown      01-Apr-2000  Ear Muffs  12.50

    While if I remove the order_by clause completely, as in this query:

        select c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price
        from items_ordered i, customers c
        where i.customerid = c.customerid
        group by c.customerid, i.item, i.order_date;

    I get these results:

        10101  John Gray        30-Dec-1999  Hoola Hoop  14.75
        10101  John Gray        02-Jan-2000  Lantern  16.00
        10101  John Gray        01-Jul-1999  Life Vest  125.00
        10101  John Gray        30-Jun-1999  Raft  58.00
        10101  John Gray        18-Aug-1999  Rain Coat  18.30
        10101  John Gray        08-Mar-2000  Sleeping Bag  88.70
        10298  Leroy Brown      01-Apr-2000  Ear Muffs  12.50
        10298  Leroy Brown      01-Dec-1999  Helmet  22.00
        10298  Leroy Brown      19-Sep-1999  Lantern  29.00
        10298  Leroy Brown      18-Mar-2000  Pocket Knife  22.38
        10298  Leroy Brown      01-Jul-1999  Skateboard  33.00
        10299  Elroy Keller     18-Jan-2000  Inflatable Mattress  38.00
        10299  Elroy Keller     06-Jul-1999  Parachute  1250.00
        10315  Lisa Jones       02-Feb-2000  Compass  8.00
        10330  Shawn Dalton     01-Jan-2000  Flashlight  28.00
        10330  Shawn Dalton     30-Jun-1999  Pogo stick  28.00
        10330  Shawn Dalton     19-Apr-2000  Shovel  16.75
        10339  Anthony Sanchez  27-Jul-1999  Umbrella  4.50
        10410  Mary Ann Howell  28-Oct-1999  Sleeping Bag  89.22
        10410  Mary Ann Howell  30-Jan-2000  Unicycle  192.50
        10413  Donald Davids    19-Jan-2000  Lawnchair  32.00
        10438  Kevin Smith      02-Nov-1999  Pillow  8.50
        10438  Kevin Smith      18-Jan-2000  Tent  79.99
        10438  Kevin Smith      01-Nov-1999  Umbrella  6.75
        10439  Conrad Giles     14-Aug-1999  Ski Poles  25.50
        10439  Conrad Giles     18-Sep-1999  Tent  88.00
        10449  Isabela Moore    15-Dec-1999  Bicycle  380.50
        10449  Isabela Moore    22-Dec-1999  Canoe  280.00
        10449  Isabela Moore    19-Mar-2000  Canoe paddle  40.00
        10449  Isabela Moore    29-Feb-2000  Flashlight  4.50
        10449  Isabela Moore    01-Sep-1999  Snow Shoes  45.00
        10449  Isabela Moore    13-Aug-1999  Unicycle  180.79

    I'm not sure what the order_by is doing here and if it's having the intended effects.
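    A side observation and a hedged sketch, neither taken from the original post: the ORDER BY sorts the rows by order_date descending, but judging from the first result set the sort looks lexical (30-Jun-1999 comes ahead of 30-Jan-2000), which suggests order_date is stored as text. If chronological order is what is wanted, converting the value before sorting would do it; MySQL syntax is assumed here.

    ```sql
    SELECT c.customerid, c.firstname, c.lastname, i.order_date, i.item, i.price
    FROM items_ordered i
    JOIN customers c ON i.customerid = c.customerid
    GROUP BY c.customerid, i.item, i.order_date
    -- parse the 'DD-Mon-YYYY' text into a real date so the newest orders really come first
    ORDER BY STR_TO_DATE(i.order_date, '%d-%b-%Y') DESC;
    ```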

    Read the article

  • Cancel UITouch Events When View Covered By Modal UIViewController

    - by kkrizka
    Hi there, I am writing an application where the user has to move some stuff on the screen using his fingers and drop them. To do this, I am using the touchesBegan,touchesEnded... function of each view that has to be moved. The problem is that sometimes the views are covered by a view displayed using the [UIViewController presentModalViewController] function. As soon as that happens, the UIView that I was moving stops receiving the touch events, since it was covered up. But there is no event telling me that it stopped receiving the events, so I can reset the state of the moved view. The following is an example that demonstrates this. The functions are part of a UIView that is being shown in the main window. It listens to touch events and when I drag the finger for some distance, it presents a modal view that covers everything. In the Run Log, it prints what touch events are received. - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchesBegan"); touchStart=[[touches anyObject] locationInView:self]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { CGPoint touchAt=[[touches anyObject] locationInView:self]; float xx=(touchAt.x-touchStart.x)*(touchAt.x-touchStart.x); float yy=(touchAt.y-touchStart.y)*(touchAt.y-touchStart.y); float rr=xx+yy; NSLog(@"touchesMoved %f",rr); if(rr > 100) { NSLog(@"Show modal"); [viewController presentModalViewController:[UIViewController new] animated:NO]; } } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchesEnded"); } - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event { NSLog(@"touchesCancelled"); } But when I test the application and trigger the modal dialog to be displayed, the following is the output in the Run Log. [Session started at 2010-03-27 16:17:14 -0700.] 2010-03-27 16:17:18.831 modelTouchCancel[2594:207] touchesBegan 2010-03-27 16:17:19.485 modelTouchCancel[2594:207] touchesMoved 2.000000 2010-03-27 16:17:19.504 modelTouchCancel[2594:207] touchesMoved 4.000000 2010-03-27 16:17:19.523 modelTouchCancel[2594:207] touchesMoved 16.000000 2010-03-27 16:17:19.538 modelTouchCancel[2594:207] touchesMoved 26.000000 2010-03-27 16:17:19.596 modelTouchCancel[2594:207] touchesMoved 68.000000 2010-03-27 16:17:19.624 modelTouchCancel[2594:207] touchesMoved 85.000000 2010-03-27 16:17:19.640 modelTouchCancel[2594:207] touchesMoved 125.000000 2010-03-27 16:17:19.641 modelTouchCancel[2594:207] Show modal Any suggestions on how to reset the state of a UIView when its touch events are interrupted by a modal view?
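    Since, as described above, no touchesEnded/touchesCancelled arrives once the modal view covers the moved view, a hedged workaround sketch (not an official API recommendation) is to reset the drag state at the one point where the code already knows the modal is about to appear:

    ```objc
    // Sketch only: -resetDragState is a hypothetical helper that restores the view's
    // pre-drag state; everything else mirrors the touchesMoved: shown above.
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        CGPoint touchAt = [[touches anyObject] locationInView:self];
        float xx = (touchAt.x - touchStart.x) * (touchAt.x - touchStart.x);
        float yy = (touchAt.y - touchStart.y) * (touchAt.y - touchStart.y);

        if (xx + yy > 100) {
            [self resetDragState];   // clean up before the modal swallows further touch events
            [viewController presentModalViewController:[UIViewController new] animated:NO];
        }
    }
    ```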

    Read the article

  • Set primary key with two integers

    - by user299196
    I have a table with primary key (ColumnA, ColumnB). I want to make a function or procedure that, when passed two integers, will insert a row into the table but make sure the largest integer always goes into ColumnA and the smaller one into ColumnB. So SetKeysWithTheseNumbers(17, 19) would return

        |---------|---------|
        | ColumnA | ColumnB |
        |---------|---------|
        | 19      | 17      |
        |---------|---------|

    and SetKeysWithTheseNumbers(19, 17) would return the same thing:

        |---------|---------|
        | ColumnA | ColumnB |
        |---------|---------|
        | 19      | 17      |
        |---------|---------|
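    A minimal sketch of such a procedure, assuming MySQL and a hypothetical table name MyTable (GREATEST and LEAST also exist in Oracle and PostgreSQL, so the same idea ports easily):

    ```sql
    DELIMITER //
    CREATE PROCEDURE SetKeysWithTheseNumbers(IN a INT, IN b INT)
    BEGIN
        -- the larger value always lands in ColumnA, the smaller in ColumnB
        INSERT INTO MyTable (ColumnA, ColumnB)
        VALUES (GREATEST(a, b), LEAST(a, b));
    END //
    DELIMITER ;

    -- usage: both calls insert the row (19, 17)
    CALL SetKeysWithTheseNumbers(17, 19);
    CALL SetKeysWithTheseNumbers(19, 17);
    ```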

    Read the article

  • select attribute mysql

    - by Viet Tran
    I have three MySQL tables:

        product (id, name)
        1  Samsung
        2  Toshiba
        3  Sony

        attribute (id, name, parentid)
        1  Size   0
        2  19"    1
        3  17"    1
        4  15"    1
        5  Color  0
        6  White  1
        7  Black  2

        attribute2product (id, productid, attributeid)
        1  1  2
        2  1  6
        3  2  2
        4  2  7
        5  3  3
        6  3  7

    And I list them like:

        Size
        -- 19" (2 products)
        -- 17" (1 product)
        -- 15" (0 products)
        Color
        -- White (1 product)
        -- Black (2 products)

    Please help me filter the products, e.g. when I choose Size 19" (which products 1 and 2 have), it should display:

        Size
        -- 19"
        Color
        -- White (1 product)
        -- Black (1 product)

    Thanks,
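    A hedged sketch of the filtering queries (my own, not from the question): restrict the per-attribute counts to products that already carry the chosen attribute, here Size 19" with attribute id 2.

    ```sql
    -- products that have the selected attribute (Size 19", attribute id 2)
    SELECT p.id, p.name
    FROM product p
    JOIN attribute2product ap ON ap.productid = p.id
    WHERE ap.attributeid = 2;

    -- per-attribute product counts, restricted to those same products;
    -- attributes with no remaining products show a count of 0
    SELECT a.id, a.name, COUNT(DISTINCT filtered.productid) AS product_count
    FROM attribute a
    LEFT JOIN attribute2product ap ON ap.attributeid = a.id
    LEFT JOIN (SELECT productid
               FROM attribute2product
               WHERE attributeid = 2) AS filtered
           ON filtered.productid = ap.productid
    WHERE a.parentid <> 0            -- skip the group headers (Size, Color)
    GROUP BY a.id, a.name;
    ```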

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Disk operations freeze Debian

    - by Grzenio
    Hi, I have just installed Debian testing on my new desktop and I am not very happy with performance - when I perform a disk intensive operation, e.g. upgrade packages in the system, everything seems to freeze, e.g. changing tabs in Iceweasel takes 3 seconds. I run the Debian on my 3 year old Thinkpad X60 ultra-portable, and I don't have these issues. (every single parameter of the laptop is much worse than the desktop). I am using the default packaged kernel and scripts. I run hdparm -t /dev/sda1 And I got around 96GB/s, which is expected. What else can I try to make it work better? EDIT: grzes:/home/ga# hdparm -i /dev/sda /dev/sda: Model=WDC WD15EARS-00Z5B1, FwRev=80.00A80, SerialNo=WD-WMAVU1362357 Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq } RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=2930277168 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120} PIO modes: pio0 pio3 pio4 DMA modes: mdma0 mdma1 mdma2 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6 AdvancedPM=no WriteCache=enabled Drive conforms to: Unspecified: ATA/ATAPI-1,2,3,4,5,6,7 * signifies the current active mode EDIT2: Even my wife said "on this new computer I can't do anything when I copy the photos from the camera and its much worse than on the old one". So it must be serious. EDIT3: Updated to 2.6.32, but still no improvement EDIT4: I forgot to mention that the new disk is ext4, the old was ext3. EDIT5: Still not solved. I have a P43 ASUS P5QL-E board. Lines from dmesg that seem relevant: [ 0.370850] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.370852] io scheduler noop registered [ 0.370853] io scheduler anticipatory registered [ 0.370854] io scheduler deadline registered [ 0.370876] io scheduler cfq registered (default) ... [ 0.908233] ata_piix 0000:00:1f.2: version 2.13 [ 0.908243] ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.908246] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 0.908275] ata_piix 0000:00:1f.2: setting latency timer to 64 [ 0.908316] scsi0 : ata_piix [ 0.908374] scsi1 : ata_piix [ 0.909180] ata1: SATA max UDMA/133 cmd 0xa000 ctl 0x9c00 bmdma 0x9480 irq 19 [ 0.909183] ata2: SATA max UDMA/133 cmd 0x9880 ctl 0x9800 bmdma 0x9488 irq 19 [ 0.909199] ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.909202] ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ] [ 0.909228] ata_piix 0000:00:1f.5: setting latency timer to 64 [ 0.909279] scsi2 : ata_piix [ 0.909326] scsi3 : ata_piix [ 0.910021] ata3: SATA max UDMA/133 cmd 0xb000 ctl 0xac00 bmdma 0xa480 irq 19 [ 0.910024] ata4: SATA max UDMA/133 cmd 0xa880 ctl 0xa800 bmdma 0xa488 irq 19 [ 0.915575] FDC 0 is a post-1991 82077 ... [ 1.716062] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.716074] ata1.01: SATA link down (SStatus 0 SControl 300) [ 1.724318] ata1.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133 [ 1.724322] ata1.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32) [ 1.740339] ata1.00: configured for UDMA/133 [ 1.740428] scsi 0:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5 [ 1.746788] scsi 6:0:0:0: CD-ROM ASUS DRW-1608P 1.17 PQ: 0 ANSI: 5 ... 
[ 1.925981] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB) [ 1.926005] sd 0:0:0:0: [sda] Write Protect is off [ 1.926007] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.926020] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.926092] sda:sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray [ 1.931106] Uniform CD-ROM driver Revision: 3.20 [ 1.931191] sr 6:0:0:0: Attached scsi CD-ROM sr0 ... [ 1.941936] sda1 sda2 sda3 sda4 < sda5 sda6 > [ 1.967691] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.970938] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.970959] sr 6:0:0:0: Attached scsi generic sg1 type 5 ... [ 2.500086] EXT4-fs (sda3): mounted filesystem with ordered data mode ... [ 7.150468] EXT4-fs (sda6): mounted filesystem with ordered data mode

    Read the article

  • How to call a new thread from a button click

    - by Lynnooi
    Hi, I'm trying to call a thread on a button click (btn_more) but i cant get it right. The thread is to get some data and update the images. The problem i have is if i only update 4 or 5 images then it works fine. But if i load more than 5 images i will get a force close. At times when the internet is slow I will face the same problem too. Can please help me to solve this problem or provide me some guidance? Here is the error i got from LogCat: 04-19 18:51:44.907: ERROR/AndroidRuntime(1034): Uncaught handler: thread main exiting due to uncaught exception 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): java.lang.NullPointerException 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at mobile9.android.gallery.GalleryWallpapers.setWallpaperThumb(GalleryWallpapers.java:383) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at mobile9.android.gallery.GalleryWallpapers.access$4(GalleryWallpapers.java:320) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at mobile9.android.gallery.GalleryWallpapers$1.handleMessage(GalleryWallpapers.java:266) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at android.os.Handler.dispatchMessage(Handler.java:99) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at android.os.Looper.loop(Looper.java:123) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at android.app.ActivityThread.main(ActivityThread.java:4310) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at java.lang.reflect.Method.invokeNative(Native Method) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at java.lang.reflect.Method.invoke(Method.java:521) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 04-19 18:51:44.927: ERROR/AndroidRuntime(1034): at dalvik.system.NativeStart.main(Native Method) My Code: public class GalleryWallpapers extends Activity implements Runnable { public static String MODEL = android.os.Build.MODEL ; private static final String rootURL = "http://www.uploadhub.com/mobile9/gallery/c/"; private int wallpapers_count = 0; private int ringtones_count = 0; private int index = 0; private int folder_id; private int page; private int page_counter = 1; private String family; private String keyword; private String xmlURL = ""; private String thread_op = "xml"; private ImageButton btn_back; private ImageButton btn_home; private ImageButton btn_filter; private ImageButton btn_search; private TextView btn_more; private ProgressDialog pd; GalleryExampleHandler myExampleHandler = new GalleryExampleHandler(); Context context = GalleryWallpapers.this.getBaseContext(); Drawable image; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); MODEL = "HTC Legend"; // **needs to be remove after testing** try { MODEL = URLEncoder.encode(MODEL,"UTF-8"); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } requestWindowFeature(Window.FEATURE_NO_TITLE); setContentView(R.layout.gallerywallpapers); Bundle b = this.getIntent().getExtras(); family = b.getString("fm").trim(); folder_id = Integer.parseInt(b.getString("fi")); keyword = b.getString("kw").trim(); page = Integer.parseInt(b.getString("page").trim()); WindowManager w = getWindowManager(); Display d = w.getDefaultDisplay(); final int width = d.getWidth(); final int height = d.getHeight(); xmlURL = rootURL + "wallpapers/1/?output=rss&afm=wallpapers&mdl=" + MODEL + "&awd=" + width + 
"&aht=" + height; if (folder_id > 0) { xmlURL = xmlURL + "&fi=" + folder_id; } pd = ProgressDialog.show(GalleryWallpapers.this, "", "Loading...", true, false); Thread thread = new Thread(GalleryWallpapers.this); thread.start(); btn_more = (TextView) findViewById(R.id.btn_more); btn_more.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { myExampleHandler.filenames.clear(); myExampleHandler.authors.clear(); myExampleHandler.duration.clear(); myExampleHandler.fileid.clear(); btn_more.setBackgroundResource(R.drawable.btn_more_click); page = page + 1; thread_op = "xml"; xmlURL = rootURL + "wallpapers/1/?output=rss&afm=wallpapers&mdl=" + MODEL + "&awd=" + width + "&aht=" + height; xmlURL = xmlURL + "&pg2=" + page; index = 0; pd = ProgressDialog.show(GalleryWallpapers.this, "", "Loading...", true, false); Thread thread = new Thread(GalleryWallpapers.this); thread.start(); } }); } public void run() { if(thread_op.equalsIgnoreCase("xml")){ readXML(); } else if(thread_op.equalsIgnoreCase("getImg")){ getWallpaperThumb(); } handler.sendEmptyMessage(0); } private Handler handler = new Handler() { @Override public void handleMessage(Message msg) { int count = 0; if (!myExampleHandler.filenames.isEmpty()){ count = myExampleHandler.filenames.size(); } count = 6; if(thread_op.equalsIgnoreCase("xml")){ pd.dismiss(); thread_op = "getImg"; btn_more.setBackgroundResource(R.drawable.btn_more); } else if(thread_op.equalsIgnoreCase("getImg")){ setWallpaperThumb(); index++; if (index < count){ Thread thread = new Thread(GalleryWallpapers.this); thread.start(); } } } }; private void readXML(){ if (xmlURL.length() != 0) { try { /* Create a URL we want to load some xml-data from. */ URL url = new URL(xmlURL); /* Get a SAXParser from the SAXPArserFactory. */ SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); /* Get the XMLReader of the SAXParser we created. */ XMLReader xr = sp.getXMLReader(); /* * Create a new ContentHandler and apply it to the * XML-Reader */ xr.setContentHandler(myExampleHandler); /* Parse the xml-data from our URL. */ xr.parse(new InputSource(url.openStream())); /* Parsing has finished. */ /* * Our ExampleHandler now provides the parsed data to * us. 
*/ ParsedExampleDataSet parsedExampleDataSet = myExampleHandler .getParsedData(); } catch (Exception e) { //showDialog(DIALOG_SEND_LOG); } } } private void getWallpaperThumb(){ int i = this.index; if (!myExampleHandler.filenames.elementAt(i).toString().equalsIgnoreCase("")){ image = ImageOperations(context, myExampleHandler.thumbs.elementAt(i).toString(), "image.jpg"); } } private void setWallpaperThumb(){ int i = this.index; if (myExampleHandler.filenames.elementAt(i).toString() != null) { String file_info = myExampleHandler.filenames.elementAt(i).toString(); String author = "\nby " + myExampleHandler.authors.elementAt(i).toString(); final String folder = myExampleHandler.folder_id.elementAt(folder_id).toString(); final String fid = myExampleHandler.fileid.elementAt(i).toString(); ImageView imgView = new ImageView(context); TextView tv_filename = null; TextView tv_author = null; switch (i + 1) { case 1: imgView = (ImageView) findViewById(R.id.image1); tv_filename = (TextView) findViewById(R.id.filename1); tv_author = (TextView) findViewById(R.id.author1); break; case 2: imgView = (ImageView) findViewById(R.id.image2); tv_filename = (TextView) findViewById(R.id.filename2); tv_author = (TextView) findViewById(R.id.author2); break; case 3: imgView = (ImageView) findViewById(R.id.image3); tv_filename = (TextView) findViewById(R.id.filename3); tv_author = (TextView) findViewById(R.id.author3); break; case 4: . . . . . case 10: imgView = (ImageView) findViewById(R.id.image10); tv_filename = (TextView) findViewById(R.id.filename10); tv_author = (TextView) findViewById(R.id.author10); break; } if (image.getIntrinsicHeight() > 0) { imgView.setImageDrawable(image); } else { imgView.setImageResource(R.drawable.default_wallpaper); } tv_filename.setText(file_info); tv_author.setText(author); imgView.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { // Perform action on click } }); } } private Drawable ImageOperations(Context ctx, String url, String saveFilename) { try { InputStream is = (InputStream) this.fetch(url); Drawable d = Drawable.createFromStream(is, "src"); return d; } catch (MalformedURLException e) { e.printStackTrace(); return null; } catch (IOException e) { e.printStackTrace(); return null; } } }

    Read the article

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized: [root@fs-2 ~]# tail -18 /var/log/messages May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake } May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset May 5 16:54:45 fs-2 kernel: ata1: soft resetting link May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:54:55 fs-2 kernel: ata1: soft resetting link May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:05 fs-2 kernel: ata1: soft resetting link May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0) May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps May 5 16:55:40 fs-2 kernel: ata1: soft resetting link May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16) May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up May 5 16:55:45 fs-2 kernel: ata1: EH complete I tried a couple things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh but they didn't work: [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan -bash: echo: write error: Invalid argument [root@fs-2 ~]# [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l [snip] 0 new device(s) found. 0 device(s) removed. [root@fs-2 ~]# [root@fs-2 ~]# ls /dev/sda ls: /dev/sda: No such file or directory I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL5.3: [root@fs-2 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.3 (Tikanga) The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS. 
Here is the lscpi output: [root@fs-2 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us. 
Here is the lspci output: [root@www-1 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)

    Read the article

  • Android app hanging, sometimes until Force Close / Wait dialog appears

    - by fredley
    I'm making an app that records uncompressed (wav format) audio. I'm using this class to actually record the audio. Currently, my application records fine (I can play the file), however when I click the button to stop the recording, the app hangs for 10 seconds or so, with no log output or any signs of life. Finally it comes round, dumps a load of errors into the log, updates the UI etc. I'm using AsyncTasks to try and avoid this kind of thing but it's not working. Here's my code: //Called on clicks of the record button. rar is the instance of RehearsalAudioRecorder private OnClickListener RecordListener = new OnClickListener(){ @Override public void onClick(View v) { Log.d("Record","Click"); if (recording){ new stopRecordingTask().execute(rar,null,null); startStop.setText("Record"); statusBar.setText("Recording Finished, ready to Encode"); }else{ recording = true; new startRecordingTask().execute(rar,null,null); startStop.setText("Stop"); statusBar.setText("Recording Started"); } } }; private class startRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.setOutputFile("/sdcard/rarOut.wav"); r.prepare(); r.start(); return null; } } private class stopRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.stop(); r.reset(); return null; } } In Logcat, I always get output like this, which has me stumped. I have no idea what's causing it (I'm logging the RehearsalAudioRecorder class, and it's being started/stopped correctly by the button clicks. This output occurs after the log output for the button click and correct stop() method call) 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: STOPPED 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR ... 10 or more times I've been fiddling with this all day and I'm not getting anywhere, any input would be greatly appreciated. Update I've replace the AsyncTasks with Threads, still doesn't work, the app completely hangs when I click record, despite the fact the Log indicates there's nothing going on in the main thread. Still completely stumped.

    Read the article

  • jQuery autocomplete. Doesn't reveal existing matches.

    - by Alexander
    Hello fellow engineers. I have come across a problem I just can't solve. I am using autocomplete plugin for jQuery on an input. The HTML looks something like this: <tr id="row_house" class="no-display"> <td class="col_num">4</td> <td class="col_label">House Number</td> <td class="col_data"> <input type="text" title="House Number" name="house" id="house"/> <button class="pretty_button ui-state-default ui-corner-all button-finish">Get house info</button> </td> </tr> I am sure that this is the only id="house" field. Other fields that are before this one work fine with autocomplete, and it's basically the same algorithm (other variables, other data, other calls). So why doesn't it work like it should work with the following init. code: $("#house").autocomplete(["1/4","6","6/1","6/4","8","8/1","8/5","10","10/1","10/3","10/4","12","12/1","12/5","12/6","14","14/1","15","15/1","15/2","15/4","15/5","16","16/1","16/2","16/21","16/2B","16/3","16/4","17","17/1","17/2","17/4","17/5","17/6","17/7","17/8","18","18/1","18/2","18/3","18/5","18/95","19","19/1","19/2","19/3","19/4","19/5","19/6","19/7","19/8","20","20/1","20/2","20/3","20/4","21","21/1","21/2","21/3","21/4","22","22/9","23","23/2","23/4","24","24/1","24/2","24/3","24/A","25","25/1","25/10","25/2","25/4","25/5","25/6","25/7","25/8","25/9","26","26/1","26/6","27","27/2","28","28/1","29","29/2","29/3","29/4","30","30/1","30/2","30/3","31","31/1","31/3","32/A","33","34","34/1","34/11","34/2","34/3","35","35/1","35/2","35/4","36","36/1","36/A","37","37/1","37/2","38","38/1","38/2","39/1","39/2","39/3","39/4","40","40/1","41","41/2","42","43","44","45","45/1","45/10","45/11","45/12","45/13","45/14","45/15","45/16","45/17","45/2","45/3","45/6","45/7","45/8","45/9","46","47","47/2","49","49/1","50","51","51/1","51/2","52","53","54","55/7","66","109","122","190/8","412"], {minChars:1, mustMatch:true}).result(function(event, result, formatted) { var found=false; for(var index=0; index<HChouses.length; index++) //HChouses is the same array used for init, but each entry is paired with a database ID. if(HChouses[index][0]==result) { found=true; HChouseId=HChouses[index][1]; $("#row_house .button-finish").click(function() { QueryServer("HouseConnect","FillData",true,HChouseId); //this performs an AJAX request }); break; } if(!found) $("#row_house .button-finish").unbind("click"); }); Each time I start typing (say I press the "1" button), the text appears and gets deleted instantly. Rarely at all after repeated presses I get the list (although much shorter than it should be) But if after that I press the second digit, the whole thing disappears again. P.S. I use Firefox 3.6.3 for development.

    Read the article

  • Is there a man in the middle attacking my server machine?

    - by GongT
    My server has worked well for about half a year, but a strange thing happened a few hours ago. This server has two IP addresses: 58.17.85.19 & 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a "redirect loop". I did some tests ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:
    - run $cmd on my PC and on my friend's (we live on opposite sides of China, far away) - got 302
    - run $cmd on this server - got 200 OK (the content is the correct result of index.php)
    - run $cmd on another server in the same computer room - got 200 OK
    - telnet from my PC and build an HTTP request by hand - got 200 OK
    - shut down php-fpm, then run $cmd on my PC - got 302; run $cmd on the server - 502 Bad Gateway
    - shut down nginx, run $cmd on both the server and my PC - Connection refused
    - create an iptables rule refusing any connection to 58.17.85.19:80, run nc -l 80 -k -vvv on the server and run $cmd on my PC; NC shows the server accepting the connection (Connection from [my ip]) and then my connection being closed (Remove fd xx from list), yet wget still dumps out a response - got 302
    I know that normally NC will accept the connection, then dump the HTTP request from the client, and the client will wait for a response; that connection should stay open forever (in fact the client will close it because of a timeout), because NC can't give any response. So... where did my request go? Who sent a response to the client? Is there some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or... am I being attacked by a man in the middle?

    Read the article

  • Can't correctly install Lazarus

    - by user206316
    I have a little problem with installing and running Lazarus. I just upgrade ubuntu from 13.04 to 13.10. When i had 13.04, i could install lazarus without any problems, but in 13.10 lazarus magicaly dissapeared, and when i tried install it from ubuntu software center, it said something like in my software resources lazarus-ide-0.9.30.4 doesnt exist. After some research on net i tried delete all files from earlier installations, download deb packages from sourceforge and install them, but when i want to instal fpc-src, error shows up with output: (Reading database ... 100% (Reading database ... 239063 files and directories currently installed.) Unpacking fpc-src (from .../Stiahnut/Lazarus/fpc-src.deb) ... dpkg: error processing /home/richi/Stiahnut/Lazarus/fpc-src.deb (--install): trying to overwrite '/usr/share/fpcsrc/2.6.2/rtl/nativent/tthread.inc', which is also in package fpc-source-2.6.2 2.6.2-5 dpkg-deb (subprocess): decompressing archive member: internal gzip write error: Broken pipe dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg-deb (subprocess): cannot copy archive member from '/home/richi/Stiahnut/Lazarus/fpc-src.deb' to decompressor pipe: failed to write (Broken pipe) when i started lazarus, it of course tell me that it cant find fpc compier and fpc sources. So, please, i really need program for school and i dont wanna reinstall os anymore or something like that :( (Ubuntu 13.10 64bit) P.S: im not skilled in linux so if u know some commands to fix it just write them for copy and paste :) P.P.S:Sorry for bad English, im Slovak xD P.P.P.S: Thank so much for any answers update: output from sudo dpkg -l | grep "^rc" richi@Richi-Ubuntu:~/lazarus1.0.12$ sudo dpkg -l | grep "^rc" rc account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth rc appmenu-gtk:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc appmenu-gtk3:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc fp-compiler-2.6.0 2.6.0-9 amd64 Free Pascal - compiler rc fp-utils-2.6.0 2.6.0-9 amd64 Free Pascal - utilities rc lazarus-ide-0.9.30.4 0.9.30.4-4 amd64 IDE for Free Pascal - common IDE files rc lazarus-ide-1.0.10 1.0.10+dfsg-1 amd64 IDE for Free Pascal - common IDE files rc lcl-utils-0.9.30.4 0.9.30.4-4 amd64 Lazarus Components Library - command line build tools rc lcl-utils-1.0.10 1.0.10+dfsg-1 amd64 Lazarus Components Library - command line build tools rc libbamf3-1:amd64 0.4.0daily13.06.19~13.04-0ubuntu1 amd64 Window matching library - shared library rc libboost-filesystem1.49.0 1.49.0-4 amd64 filesystem operations (portable paths, iteration over directories, etc) in C++ rc libboost-signals1.49.0 1.49.0-4 amd64 managed signals and slots library for C++ rc libboost-system1.49.0 1.49.0-4 amd64 Operating system (e.g. 
diagnostics support) library rc libboost-thread1.49.0 1.49.0-4 amd64 portable C++ multi-threading rc libbrlapi0.5:amd64 4.4-8ubuntu4 amd64 braille display access via BRLTTY - shared library rc libcamel-1.2-40 3.6.4-0ubuntu1.1 amd64 Evolution MIME message handling library rc libcolumbus0-0 0.4.0daily13.04.16~13.04-0ubuntu1 amd64 error tolerant matching engine - shared library rc libdns95 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 DNS Shared Library used by BIND rc libdvbpsi7 0.2.2-1 amd64 library for MPEG TS and DVB PSI tables decoding and generating rc libebackend-1.2-5 3.6.4-0ubuntu1.1 amd64 Utility library for evolution data servers rc libedata-book-1.2-15 3.6.4-0ubuntu1.1 amd64 Backend library for evolution address books rc libedata-cal-1.2-18 3.6.4-0ubuntu1.1 amd64 Backend library for evolution calendars rc libgc1c3:amd64 1:7.2d-0ubuntu5 amd64 conservative garbage collector for C and C++ rc libgd2-xpm:amd64 2.0.36~rc1~dfsg-6.1ubuntu1 amd64 GD Graphics Library version 2 rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6.1ubuntu1 i386 GD Graphics Library version 2 rc libgnome-desktop-3-4 3.6.3-0ubuntu1 amd64 Utility library for loading .desktop files - runtime files rc libgphoto2-2:amd64 2.4.14-2 amd64 gphoto2 digital camera library rc libgphoto2-2:i386 2.4.14-2 i386 gphoto2 digital camera library rc libgphoto2-port0:amd64 2.4.14-2 amd64 gphoto2 digital camera port library rc libgphoto2-port0:i386 2.4.14-2 i386 gphoto2 digital camera port library rc libgtksourceview-3.0-0:amd64 3.6.3-0ubuntu1 amd64 shared libraries for the GTK+ syntax highlighting widget rc libgweather-3-1 3.6.2-0ubuntu1 amd64 GWeather shared library rc libharfbuzz0:amd64 0.9.13-1 amd64 OpenType text shaping engine rc libibus-1.0-0:amd64 1.4.2-0ubuntu2 amd64 Intelligent Input Bus - shared library rc libical0 0.48-2 amd64 iCalendar library implementation in C (runtime) rc libimobiledevice3 1.1.4-1ubuntu6.2 amd64 Library for communicating with the iPhone and iPod Touch rc libisc92 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 ISC Shared Library used by BIND rc libkms1:amd64 2.4.46-1 amd64 Userspace interface to kernel DRM buffer management rc libllvm3.2:i386 1:3.2repack-7ubuntu1 i386 Low-Level Virtual Machine (LLVM), runtime library rc libmikmod2:amd64 3.1.12-5 amd64 Portable sound library rc libpackagekit-glib2-14:amd64 0.7.6-3ubuntu1 amd64 Library for accessing PackageKit using GLib rc libpoppler28:amd64 0.20.5-1ubuntu3 amd64 PDF rendering library rc libraw5:amd64 0.14.7-0ubuntu1.13.04.2 amd64 raw image decoder library rc librhythmbox-core6 2.98-0ubuntu5 amd64 support library for the rhythmbox music player rc libsdl-mixer1.2:amd64 1.2.12-7ubuntu1 amd64 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsnmp15 5.4.3~dfsg-2.7ubuntu1 amd64 SNMP (Simple Network Management Protocol) library rc libsyncdaemon-1.0-1 4.2.0-0ubuntu1 amd64 Ubuntu One synchronization daemon library rc libunity-core-6.0-5 7.0.0daily13.06.19~13.04-0ubuntu1 amd64 Core library for the Unity interface. 
rc libusb-0.1-4:i386 2:0.1.12-23.2ubuntu1 i386 userspace USB programming library rc libwayland0:amd64 1.0.5-0ubuntu1 amd64 wayland compositor infrastructure - shared libraries rc linux-image-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc screen-resolution-extra 0.15ubuntu1 all Extension for the GNOME screen resolution applet rc unity-common 7.0.0daily13.06.19~13.04-0ubuntu1 all Common files for the Unity interface.

    Read the article

  • SQL rows to columns conversion

    - by Thihara
    Hi, I have a table ClassAttendance and I'm using MSSQL 2005:

        studentID    attendanceDate    status
        1004         2010-03-17        0
        1005         2010-03-17        1
        1006         2010-03-17        0
        1007         2010-03-17        0
        1004         2010-03-19        0
        1005         2010-03-19        1
        1006         2010-03-19        0
        1007         2010-03-19        0
        1004         2010-03-20        1

    As you can see, studentID is a foreign key to a table called StudentData, and attendanceDate has an unknown number of distinct dates. Can I get output like the one below using a query? I need the dates in one month to become columns, and the values of those date columns should come from the status column. The number of date records per studentID is the same; it's the number of dates in the attendanceDate field that is unknown.

        studentID    2010-03-17    2010-03-19    2010-03-20
        1004         0             0             1
        etc.

    This is for creating a report, so I need to do it in a query. Please help if you can.
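    (Not part of the question, just an illustration of the desired reshaping.) In SQL Server 2005 this rows-to-columns conversion is typically done with the PIVOT operator or with conditional aggregation over a dynamically built list of dates. As a language-neutral sketch of the target output, here is the same pivot done with pandas on the sample rows above; the column names simply follow the question.

        # Sketch only: pivot the sample attendance rows so each date becomes a column.
        import pandas as pd

        rows = [
            (1004, "2010-03-17", 0), (1005, "2010-03-17", 1),
            (1006, "2010-03-17", 0), (1007, "2010-03-17", 0),
            (1004, "2010-03-19", 0), (1005, "2010-03-19", 1),
            (1006, "2010-03-19", 0), (1007, "2010-03-19", 0),
            (1004, "2010-03-20", 1),
        ]
        df = pd.DataFrame(rows, columns=["studentID", "attendanceDate", "status"])

        # one row per student, one column per date, status as the cell value;
        # missing (studentID, date) combinations show up as NaN
        pivoted = df.pivot(index="studentID", columns="attendanceDate", values="status")
        print(pivoted)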

    Read the article

  • GRUB won't boot Windows after update from 11.10 to 12.04

    - by Holger
    thanks for your time and reading this, here's the deal: i upgraded from 11.10 to 12.04 and everything worked out until i rebooted, i had 11.10 sucessfully running as a dual boot with windows vista. when i rebooted, my GRUB was shot to hell, what ever option i selected it said partion not found or something similar... booting into a live version on a thumb drive and running bootrepair from there fixed the issue... but only for ubuntu, when i try to boot into windows it only goes back to GRUB. i'm not at home, and heres a list of what i have here with me... 1 4gb thumb drive, empty 1 8gb thumb drive, windows vista installer bootable 1 old laptop, the one i try to save, optical drive is not existent 2 Mbps internet connection can you help me get back into my windows without having to reinstall windows? or at least show me a way how to use my illustrator through a virtual machine or something? here's my grub cfg # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e set locale_dir=($root)/boot/grub/locale set lang=de_DE insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, mit Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, mit Linux 3.2.0-24-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os { 
recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e echo 'Linux 3.2.0-24-generic wird geladen …' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset echo 'Initiale Ramdisk wird geladen …' initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, mit Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, mit Linux 3.0.0-19-generic (Wiederherstellungsmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e echo 'Linux 3.0.0-19-generic wird geladen …' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=1063e402-b14f-45e5-92b6-d20a2e3a717e ro recovery nomodeset echo 'Initiale Ramdisk wird geladen …' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root 1063e402-b14f-45e5-92b6-d20a2e3a717e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows Vista (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 2C9E66B39E6674EC chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###

    Read the article

  • In Python, is there any way to have a variable be a different random number every time?

    - by woah113
    Basically I have this:

        import random
        variable1 = random.randint(13, 19)

    What that does is assign variable1 a random number between 13 and 19. Great. But what I want is for a different random number between 13 and 19 to be assigned to that variable every time it is used. Is there any way I can do this? If I'm not being clear enough, here's an example:

        import random
        variable1 = random.randint(13, 19)
        print(variable1)
        print(variable1)
        print(variable1)

    And the output I want would look something like this:

        ./script.py
        15
        19
        13

    So, is there any way I could do this in Python? (More specifically Python 3, but the answer would probably be similar for Python 2.)
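    (A minimal sketch, not from the original question.) The usual fix is to re-evaluate random.randint each time instead of storing a single result in variable1, for example by wrapping the call in a small function:

        import random

        def variable1():
            # a fresh draw between 13 and 19 (inclusive) on every call
            return random.randint(13, 19)

        print(variable1())
        print(variable1())
        print(variable1())

    Each print now triggers a new call, so the three values are drawn independently, which matches the desired output.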

    Read the article

  • One National Team One Event – SharePoint Saturday Kansas City

    - by MOSSLover
    I wasn’t expect to run an event from 1,000 miles away, but some stuff happened you know like it does and I opted in.  It was really weird, because people asked why are you living in NJ and running Kansas City?  I did move, but it was like my baby and Karthik didn’t have the ability to do it this year.  I found it really challenging, because I could not physically be in Kansas City.  At first I was freaking out and Lee Brandt, Brian Laird, and Chris Geier offered to help.  Somehow I couldn’t come the day of the event.  Time-wise it just didn’t work out.  I could do all the leg work prior to the event, but weekends just were not good.  I was going to be in DC until March or April on the weekdays, so leaving that weekend was too tough.  As it worked out Lee was my eyes and ears for the venue.  Brian was the sponsor and prize box coordinator if anyone needed to send items.  Lee also helped Brian the day of the event move all the boxes.  I did everything we could do electronically, such as get the sponsors coordinate with Michael Lotter on invoicing and getting the speakers, posting the submissions, budgeting the money, setting up a speaker dinner by phone, plus all that other stuff you do behind the scenes.  Chris was there to help Lee and Brian the day of the event and help us out with the speaker dinner.  Karthik finally got back from India and he was there the night before getting the folders together and the signs and stuffing it all.  Jason Gallicchio also helped me out (my cohort for SPS NYC) as he did the schedule and helped with posting the speakers abstracts and so did Chris Geier by posting the bios.  The lot of them enlisted a few other monkeys to help out.  It was the weirdest thing I’ve ever seen, but it worked.  Around 100+ attendees ended up showing and I hear it was  a great event.  Jason, Michael, Chris, Karthik, Brian, and Lee are not all from the same area, but they helped me out in bringing this event together.  It was a national SharePoint Saturday team that brought together a specific local event for Kansas City.  It’s like a metaphor for the entire SharePoint Community.  We help our own kind out we don’t let me fail.  I know Lee and Brian aren’t technically SharePoint People they are honorary SharePoint Community Members.  Thanks everyone for the support and help in bringing this event together.  Technorati Tags: SharePoint Saturday,SPS KC,SharePoint,SharePoint Saturday Kanas City,Kansas City

    Read the article
