Search Results

Search found 46833 results on 1874 pages for 'distributed system'.


  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources were put aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain. In this first example, we'll keep things very simple. Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO, as well as security.

    For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. CPU and RAM are easy. The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other. For the boot disk, we'll need two things: a physical piece of storage to hold the data - this is called the backend device in LDoms speak - and a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this and how to choose the right one in a later article. Here we go:

        root@sun # ldm create mars
        root@sun # ldm set-vcpu 8 mars
        root@sun # ldm set-mau 1 mars
        root@sun # ldm set-memory 8g mars
        root@sun # zfs create rpool/guests
        root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
        root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
                   mars.root@primary-vds
        root@sun # ldm add-vdisk root mars.root@primary-vds mars
        root@sun # ldm add-vnet net0 switch-primary mars

    That's all, mars is now ready to power on. There are just three commands between us and the OK prompt of mars: we have to "bind" the domain, start it, and connect to its console. Binding is the process where the hypervisor actually puts all the pieces that we've configured together. If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else). Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us what port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.

        root@sun # ldm set-variable auto-boot\?=false mars
        root@sun # ldm bind mars
        root@sun # ldm start mars
        root@sun # ldm list
        NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
        primary  active  -n-cv-  UART  8     7680M   0.5%  1d 4h 30m
        mars     active  -t----  5000  8     8G      12%   1s
        root@sun # telnet localhost 5000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        ~Connecting to console "mars" in group "mars" ....
        Press ~? for control options ..

        {0} ok banner
        SPARC T3-4, No Keyboard
        Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
        OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
        Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
        {0} ok

    We're done, mars is ready to install Solaris, preferably using AI, of course ;-) But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:

        {0} ok printenv auto-boot?
        auto-boot? = false
        {0} ok printenv boot-device
        boot-device = disk net
        {0} ok devalias
        root             /virtual-devices@100/channel-devices@200/disk@0
        net0             /virtual-devices@100/channel-devices@200/network@0
        net              /virtual-devices@100/channel-devices@200/network@0
        disk             /virtual-devices@100/channel-devices@200/disk@0
        virtual-console  /virtual-devices/console@1
        name             aliases

    We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked. Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started. The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image. The actual devices these aliases point to are shown with the command "devalias". Here, we have one line each for "disk" and "net". The device paths speak for themselves. Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device. These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk". Remember this, as it is very useful once you have several dozen disk devices...

    To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity. This should be enough to get you going. I will cover all the more advanced features and a little more theoretical background in several follow-on articles. For some background reading, I'd recommend the following links:

    LDoms 2.2 Admin Guide: Setting up Guest Domains
    Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session.
    OpenBoot 4.x command reference - All the things you can do at the ok prompt
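    A quick aside: if you want to tear this test guest down again and reclaim its resources, the inverse steps are just as short. A minimal sketch, assuming the domain and backend names used above (cleanup isn't covered in this article):

        root@sun # ldm stop mars
        root@sun # ldm unbind mars
        root@sun # ldm remove-domain mars
        root@sun # zfs destroy rpool/guests/mars.bootdisk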


  • JSF Navigation issue Facelets and Beans

    - by ortho
    Hi, I have an issue with navigation in my simple JSF system. I have a MainBean that has two methods: public String register() and public String login(). I have played with faces-config.xml for several hours and I feel like I'm missing something very important, because I think I have tried all the simple solutions so far :). I have added the MySQL connector (JDBC) to Tomcat's lib folder and I am able to register the table to the MySQL database. It even allows my users to log in to the page. The only problem is that I can't use navigation in any other page than login.xhtml. It seems like navigation is only active on this one. I tried to use * but no joy. I am sure that there is a simple fix for that and someone will come up with the correct solution soon. Let's skip all the MySQL part and try to fix the navigation issue please. faces-config.xml:

        <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                http://java.sun.com/xml/ns/javaee/web-facesconfig_1_2.xsd"
            version="1.2">
          <application>
            <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
          </application>
          <managed-bean>
            <managed-bean-name>mainBean</managed-bean-name>
            <managed-bean-class>dk.itu.beans.MainBean</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
          </managed-bean>
          <managed-bean>
            <managed-bean-name>registerBean</managed-bean-name>
            <managed-bean-class>dk.itu.beans.RegisterBean</managed-bean-class>
            <managed-bean-scope>session</managed-bean-scope>
          </managed-bean>
          <navigation-rule>
            <from-view-id>/login.xhtml</from-view-id>
            <navigation-case>
              <from-outcome>success</from-outcome>
              <to-view-id>/welcome.xhtml</to-view-id>
            </navigation-case>
            <navigation-case>
              <from-outcome>failure</from-outcome>
              <to-view-id>/login_failed.xhtml</to-view-id>
            </navigation-case>
            <navigation-case>
              <from-outcome>sign</from-outcome>
              <to-view-id>/register.xhtml</to-view-id>
            </navigation-case>
          </navigation-rule>
          <navigation-rule>
            <from-view-id>/login_failed.xhtml</from-view-id>
            <navigation-case>
              <from-outcome>back</from-outcome>
              <to-view-id>/login.xhtml</to-view-id>
            </navigation-case>
          </navigation-rule>
        </faces-config>

    Here the last navigation-rule, from login_failed.xhtml, does not work at all. login.xhtml is the main (starting) view. In login_failed.xhtml I tried many options: I used an action="#{mainBean.register}" method that returns the "sign" string, and none of these worked. There is one more file, register.xhtml, not specified in faces-config.xml because that did not work either (going to it from login.xhtml by button works fine; I tried to manage navigation from login_failed.xhtml first, and then I will apply the same rule for registration, to come back to the login page when the customer registers his nick name). Basically, mainBean.register now calls the database and returns a string, but obviously it doesn't navigate to any view (though it does add an entry to the database). I believe it is a simple fix for most experienced web developers and any help will be greatly appreciated. I use Eclipse, Tomcat 6 and Windows Vista, if that helps :) Thank you in advance. Kindest regards.
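    For the wildcard attempt: in JSF the asterisk has to be the from-view-id of its own rule, so a minimal sketch of a global navigation rule (matching the outcome names above) would be:

        <navigation-rule>
          <from-view-id>*</from-view-id>
          <navigation-case>
            <from-outcome>back</from-outcome>
            <to-view-id>/login.xhtml</to-view-id>
          </navigation-case>
        </navigation-rule>

    Another common reason navigation works on only one page is a commandButton or commandLink sitting outside an <h:form> on the other pages; without the enclosing form, the action never fires at all.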


  • Connecting to MSSQL 2008 from Java

    - by Xinus
    I am trying to connect to an MSSQL 2008 server from Java. Here is the program:

        import java.sql.*;

        public class connectURL {

            public static void main(String[] args) {
                // Create a variable for the connection string.
                String connectionUrl = "jdbc:sqlserver://localhost/SQLEXPRESS/Databases/HelloWorld:1433;";// +
                        //"databaseName=HelloWorld;integratedSecurity=true;";

                // Declare the JDBC objects.
                Connection con = null;
                Statement stmt = null;
                ResultSet rs = null;

                try {
                    // Establish the connection.
                    Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                    con = DriverManager.getConnection(connectionUrl);

                    // Create and execute an SQL statement that returns some data.
                    String SQL = "SELECT TOP 10 * FROM Person.Contact";
                    stmt = con.createStatement();
                    rs = stmt.executeQuery(SQL);

                    // Iterate through the data in the result set and display it.
                    while (rs.next()) {
                        System.out.println(rs.getString(4) + " " + rs.getString(6));
                    }
                }
                // Handle any errors that may have occurred.
                catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (rs != null) try { rs.close(); } catch (Exception e) {}
                    if (stmt != null) try { stmt.close(); } catch (Exception e) {}
                    if (con != null) try { con.close(); } catch (Exception e) {}
                }
            }
        }

    But it shows this error:

        com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host localhost/SQLEXPRESS/Databases/HelloWorld, port 1433 has failed. Error: "null. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.".
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1049)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:833)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:716)
            at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:841)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at connectURL.main(connectURL.java:43)

    I have followed all the instructions given in http://teamtutorials.com/database-tutorials/configuring-and-creating-a-database-in-ms-sql-2008. What can be the problem?
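    For comparison (this is the URL shape Microsoft's JDBC driver documents, not code from the post): a named instance is given with a backslash or the instanceName property, and the database with the databaseName property. The path-style URL in the code above is what produces the odd "localhost/SQLEXPRESS/Databases/HelloWorld" host in the error:

        String connectionUrl =
            "jdbc:sqlserver://localhost\\SQLEXPRESS;databaseName=HelloWorld;integratedSecurity=true;";
        // or, pinning the port instead of relying on the SQL Browser service:
        // "jdbc:sqlserver://localhost:1433;databaseName=HelloWorld;integratedSecurity=true;"

    (Note "\\" is the Java escape for a single backslash, and integratedSecurity=true additionally needs sqljdbc_auth.dll on the Java library path; drop it to use SQL logins.)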


  • Problem displaying a PDF from my JSF portlet in Liferay

    - by Stefano
    I use Liferay 5.2 with jsf-portlet. From the page I want to press a button to generate a PDF. In the managed bean I build the PDF and I want to return it in the response. In a ByteArrayOutputStream named outputStream I have my PDF, built with JasperReports. I write:

        PortletResponse portletResponse = (PortletResponse) externalCtx.getResponse();
        HttpServletResponse httpResponse = PortalUtil.getHttpServletResponse(portletResponse);
        ServletOutputStream out = httpResponse.getOutputStream();
        String filename = "Pdf" + System.currentTimeMillis() + ".pdf";
        httpResponse.reset();
        httpResponse.setContentType("application/pdf");
        httpResponse.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
        httpResponse.setContentLength(outputStream.size());
        outputStream.writeTo(out);
        out.flush();
        out.close();

    I do not see anything output! In the JBoss log I read an IllegalStateException. What is wrong?

    LOG

        11:03:19,716 INFO [STDOUT] 11:03:19,716 ERROR [IncludeTag] Current URL /web/organo-di-governo/datawarehouse?p_p_id=1_WAR_Portlet_Datawarehouse_INSTANCE_D7s7&p_p_lifecycle=1&p_p_state=normal&p_p_mode=view&p_p_col_id=column-2&p_p_col_count=1&_1_WAR_Portlet_Datawarehouse_INSTANCE_D7s7_com.sun.faces.portlet.VIEW_ID=%2Fview.xhtml&_1_WAR_Portlet_Datawarehouse_INSTANCE_D7s7_com.sun.faces.portlet.NAME_SPACE=_1_WAR_Portlet_Datawarehouse_INSTANCE_D7s7_ generates exception: null
        11:03:19,717 INFO [STDOUT] 11:03:19,717 ERROR [IncludeTag] java.lang.IllegalStateException
            at com.liferay.portal.servlet.filters.strip.StripResponse.getWriter(StripResponse.java:85)
            at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:125)
            at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:118)
            at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:326)
            at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:342)
            at org.apache.jasper.runtime.JspWriterImpl.print(JspWriterImpl.java:468)
            at com.liferay.taglib.util.ThemeUtil.includeVM(ThemeUtil.java:208)
            at com.liferay.taglib.util.ThemeUtil.include(ThemeUtil.java:68)
            at com.liferay.taglib.util.IncludeTag.doEndTag(IncludeTag.java:59)
            at org.apache.jsp.html.common.themes.portal_jsp._jspx_meth_liferay_002dtheme_005finclude_005f1(portal_jsp.java:816)
            at org.apache.jsp.html.common.themes.portal_jsp._jspx_meth_c_005fotherwise_005f0(portal_jsp.java:788)
            at org.apache.jsp.html.common.themes.portal_jsp._jspService(portal_jsp.java:724)
            at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:373)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:336)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        11:03:19,718 ERROR [[jsp]] Servlet.service() for servlet jsp threw exception java.lang.IllegalStateException
        11:03:19,719 ERROR [[Main Servlet]] Servlet.service() for servlet Main Servlet threw exception java.lang.IllegalStateException
            at com.liferay.portal.servlet.filters.strip.StripResponse.getWriter(StripResponse.java:85)
            at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:125)
            at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:118)
        11:03:19,722 INFO [STDOUT]
        11:03:19,720 ERROR [OpenSSOFilter] org.apache.jasper.JasperException: java.lang.IllegalStateException
        org.apache.jasper.JasperException: java.lang.IllegalStateException
            at org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:521)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:409)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:336)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265)
        11:03:19,722 INFO [STDOUT] n.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
        Caused by: java.lang.IllegalStateException
            at com.liferay.portal.servlet.filters.strip.StripResponse.getWriter(StripResponse.java:85)
            at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:125)
            at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:118)
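    One pattern worth considering (a sketch of the JSR-286 route, not code from the post): a render request is the wrong phase for streaming a file, because the portal keeps writing page markup around you and later calls getWriter() on the same response, which is exactly what StripResponse.getWriter is complaining about. Serving the PDF from serveResource avoids that collision; buildPdf is a placeholder for the JasperReports step:

        @Override
        public void serveResource(ResourceRequest request, ResourceResponse response)
                throws PortletException, IOException {
            ByteArrayOutputStream outputStream = buildPdf(); // hypothetical helper
            String filename = "Pdf" + System.currentTimeMillis() + ".pdf";
            response.setContentType("application/pdf");
            response.addProperty("Content-Disposition",
                    "attachment; filename=\"" + filename + "\"");
            OutputStream out = response.getPortletOutputStream();
            outputStream.writeTo(out);
            out.flush();
        }

    The button then points at a resource URL instead of firing a normal action. Liferay 5.2 supports Portlet 2.0, though a JSF 1.2 portlet bridge may need its own wiring to reach this method.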


  • How can I close a Window using the OS-X ScriptingBridge framework, from Perl?

    - by Gavin Brock
    Problem... Since MacPerl is no longer supported on 64-bit Perl, I am trying alternative frameworks to control Terminal.app. I am trying the ScriptingBridge, but have run into a problem passing an enumerated string to the closeSaving method using the PerlObjCBridge. I want to call:

        typedef enum {
            TerminalSaveOptionsYes = 'yes ' /* Save the file. */,
            TerminalSaveOptionsNo  = 'no ' /* Do not save the file. */,
            TerminalSaveOptionsAsk = 'ask ' /* Ask the user whether or not to save the file. */
        } TerminalSaveOptions;

        - (void) closeSaving:(TerminalSaveOptions)saving savingIn:(NSURL *)savingIn; // Close a document.

    Attempted Solution... I have tried:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use Foundation;

        # Load the ScriptingBridge framework
        NSBundle->bundleWithPath_('/System/Library/Frameworks/ScriptingBridge.framework')->load;
        @SBApplication::ISA = qw(PerlObjCBridge);

        # Set up scripting bridge for Terminal.app
        my $terminal = SBApplication->applicationWithBundleIdentifier_("com.apple.terminal");

        # Open a new window, get back the tab
        my $tab = $terminal->doScript_in_('exec sleep 60', undef);
        warn "Opened tty: ".$tab->tty->UTF8String; # Yes, it is a tab

        # Now try to close it
        # Simple idea
        eval { $tab->closeSaving_savingIn_('no ', undef) };
        warn $@ if $@;

        # Try passing a string ref
        my $no = 'no ';
        eval { $tab->closeSaving_savingIn_(\$no, undef) };
        warn $@ if $@;

        # Ok - get a pointer to the string
        my $pointer = pack("P4", $no);
        eval { $tab->closeSaving_savingIn_($pointer, undef) };
        warn $@ if $@;
        eval { $tab->closeSaving_savingIn_(\$pointer, undef) };
        warn $@ if $@;

        # Try a pointer decoded as an int, like PerlObjCBridge uses
        my $int_pointer = unpack("L!", $pointer);
        eval { $tab->closeSaving_savingIn_($int_pointer, undef) };
        warn $@ if $@;
        eval { $tab->closeSaving_savingIn_(\$int_pointer, undef) };
        warn $@ if $@;

        # Aaarrgghhhh....

    As you can see, all my guesses at how to pass the enumerated string fail. Before you flame me... I know that I could use another language (Ruby, Python, Cocoa) to do this, but that would require translating the rest of the code. I might be able to use CamelBones, but I don't want to assume my users have it installed. I could also use the NSAppleScript framework (assuming I went to the trouble of finding the tab and window IDs), but it seems odd to have to resort to it for just this one call.
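    One more idea the post doesn't try (an assumption on my part, not a verified fix): TerminalSaveOptions values are OSType four-character codes, i.e. plain 32-bit integers, so the enum may need to go across the bridge as a number rather than as a string:

        # 'no  ' as a 32-bit big-endian integer - the numeric value of the OSType
        my $save_no = unpack("N", "no  ");
        eval { $tab->closeSaving_savingIn_($save_no, undef) };
        warn $@ if $@;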


  • Try a sample: Using the counter predicate for event sampling

    - by extended_events
    Extended Events offers a rich filtering mechanism, called predicates, that allows you to reduce the number of events you collect by specifying criteria that will be applied during event collection. (You can find more information about predicates in Using SQL Server 2008 Extended Events (by Jonathan Kehayias).) By evaluating predicates early in the event firing sequence we can reduce the performance impact of collecting events by stopping event collection when the criteria are not met. You can specify predicates on both event fields and on a special object called a predicate source. Predicate sources are similar to actions in that they typically relate to some type of global information available from the server. You will find that many of the actions available in Extended Events have equivalent predicate sources, but actions and predicate sources are not the same thing. Applying predicates, whether on a field or a predicate source, is very similar to what you are used to in T-SQL in terms of how they work; you pick some field/source and compare it to a value, for example, session_id = 52.

    There is one predicate source that merits special attention though, not just for its special use, but for how the order of predicate evaluation impacts the behavior you see. I'm referring to the counter predicate source. The counter predicate source gives you a way to sample a subset of events that otherwise meet the criteria of the predicate; for example, you could collect every other event, or only every tenth event.

    Simple Counting

    The counter predicate source works by creating an in-memory counter that increments every time the predicate statement is evaluated. Here is a simple example with my favorite event, sql_statement_completed, that only collects the second statement that is run. (OK, that's not much of a sample, but this is for demonstration purposes.) Here is the session definition:

        CREATE EVENT SESSION counter_test ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text)
            WHERE package0.counter = 2)
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)

    You can find general information about the session DDL syntax in BOL and from Pedro's post Introduction to Extended Events. The important part here is the WHERE clause that specifies I only want the event where package0.counter = 2; in other words, only the second instance of the event. Notice that I need to provide the package name along with the predicate source. You don't need to provide the package name if you're using event fields, only for predicate sources. Let's say I run the following test queries:

        -- Run three statements to test the session
        SELECT 'This is the first statement'
        GO
        SELECT 'This is the second statement'
        GO
        SELECT 'This is the third statement';
        GO

    Once you return the event data from the ring buffer and parse the XML (see my earlier post on reading event data) you should see something like this:

        event_name                sql_text
        sql_statement_completed   SELECT 'This is the second statement'

    You can see that only the second statement from the test was actually collected. (Feel free to try this yourself. Check out what happens if you remove the WHERE clause from your session. Go ahead, I'll wait.)

    Percentage Sampling

    OK, so that wasn't particularly interesting, but you can probably see where this could be: for example, let's say I need a 25% sample of the statements executed on my server for some type of QA analysis; that might be more interesting than just the second statement.

    All comparisons of predicates are handled using an object called a predicate comparator; the simple comparisons such as equals, greater than, etc. are mapped to the common mathematical symbols you know and love (e.g. = and >), but to do the less common comparisons you will need to use the predicate comparators directly. You would probably look to the MOD operation to do this type of sampling; we would too, but we don't call it MOD, we call it divides_by_uint64. This comparator evaluates whether one number is divisible by another with no remainder. The general syntax for using a predicate comparator is pred_comp(field, value); field is always first and value is always second. So let's take a look at how the session changes to answer our new question of 25% sampling:

        CREATE EVENT SESSION counter_test_25 ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text)
            WHERE package0.divides_by_uint64(package0.counter,4))
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
        GO

    Here I've replaced the simple equivalency check with the divides_by_uint64 comparator to check whether the counter is evenly divisible by 4, which gives us back every fourth record. I'll leave it as an exercise for the reader to test this session.

    Why order matters

    I indicated at the start of this post that order matters when it comes to the counter predicate; it does. Like most other predicate systems, Extended Events evaluates the predicate statement from left to right; as soon as the predicate statement is proven false we abandon evaluation of the remainder of the statement. The counter predicate source is only incremented when it is evaluated, so whether or not the counter is incremented will depend on where it is in the predicate statement and whether a previous criterion made the predicate false or not. Here is a generic example:

        Pred1: (WHERE statement_1 AND package0.counter = 2)
        Pred2: (WHERE package0.counter = 2 AND statement_1)

    Let's say I cause a number of events as follows and examine what happens to the counter predicate source.

        Iteration   Statement         Pred1 Counter   Pred2 Counter
        A           Not statement_1   0               1
        B           statement_1       1               2
        C           Not statement_1   1               3
        D           statement_1       2               4

    As you can see, in the case of Pred1, statement_1 is evaluated first; when it fails (A & C) predicate evaluation is stopped and the counter is not incremented. With Pred2 the counter is evaluated first, so it is incremented on every iteration of the event and the remaining parts of the predicate are then evaluated. In this example, Pred1 would return an event for D while Pred2 would return an event for B. But wait, there is an interesting side-effect here; consider Pred2 if I had run my statements in the following order:

        Not statement_1
        Not statement_1
        statement_1
        statement_1

    In this case I would never get an event back from the system, because at the point at which counter=2, the rest of the predicate evaluates as false, so the event is not returned. If you're using the counter predicate for sampling and you're not getting the expected events, or any events, check the order of the predicate criteria. As a general rule I'd suggest that the counter criteria should be the last element of your predicate statement, since that will ensure that your sampling rate applies to the set of event records defined by the rest of your predicate.

    Aside: I'm interested in hearing about uses for putting the counter predicate criteria earlier in the predicate statement. If you have one, post it in a comment to share with the class.
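    Following that general rule, a sampled production session would put its selective criteria first and the counter last. A minimal sketch (the database_id value of 5 is an illustrative assumption, not from the post) that keeps roughly every tenth qualifying statement:

        CREATE EVENT SESSION counter_sample_10 ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text)
            WHERE sqlserver.database_id = 5
              AND package0.divides_by_uint64(package0.counter,10))
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)

    Because the counter sits last, it only increments for statements that already passed the database_id check, so the sample is 10% of that filtered set rather than 10% of all statements.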
- Mike


  • Beginner Geek: How to Use Bookmarklets on Any Device

    - by Chris Hoffman
    Web browser bookmarklets allow you to perform actions on the current page with just a click or tap. They’re a lightweight alternative to browser extensions. They even work on mobile browsers that don’t support traditional extensions. To use bookmarklets, all you need is a web browser that supports bookmarks — that’s it!

    Bookmarklets Explained

    Web pages you view in your browser use JavaScript code. That’s why web pages aren’t just static documents anymore — they’re dynamic. A bookmarklet is a normal bookmark with a piece of JavaScript code instead of a web address. When you click or tap the bookmarklet, it will execute the JavaScript code on the current page instead of loading a different page, as most bookmarks do. Bookmarklets can be used to do something to a web page with a single click. For example, you’ll find bookmarklets associated with web services like Twitter, Facebook, Google+, LinkedIn, Pocket, and LastPass. When you click the bookmarklet, it will run code that lets you easily share the current page with that service.

    Bookmarklets don’t just have to be associated with web services. A bookmarklet you click could modify the appearance of the page, stripping away most of the junk and giving you a clean “reading mode.” It could alter fonts, remove images, or insert other content. It can access anything the web page could access. For example, you could use a bookmarklet to reveal a password that just appears as ******* on the page. Unlike browser extensions, bookmarklets don’t run in the background and bog down your browser. They don’t do anything at all until you click them. Because they just use the standard bookmark system, they can also be used in mobile browsers where you couldn’t run extensions. For example, you could install the Pocket bookmarklet in Safari on an iPad and get an “Add to Pocket” option in Safari. Safari doesn’t offer browser extensions and Apple’s iOS doesn’t offer a “Share” feature like Android and Windows 8 do, so this is the only way to get this direct integration. You could even use the LastPass bookmarklets in Safari on an iPad to integrate LastPass with the Safari web browser.

    Where to Find Bookmarklets

    If you’re looking for a bookmarklet for a particular service, you’ll generally find the bookmarklet on that service’s site. Websites like Twitter, Facebook, and Pocket host pages where they provide bookmarklets along with browser extensions. Bookmarklets aren’t like programs. They’re really just a piece of text that you can put in a bookmark, so you don’t have to download them from a specific site. You can get them from practically anywhere — installing them just involves copying a bit of text off of a web page. For example, you can just search the web for “reveal password bookmarklet” if you want a bookmarklet that will reveal passwords. We’ve covered many of the must-have bookmarklets — and our readers have chimed in too — so take a look at our lists for more examples.

    How to Install a Bookmarklet

    Bookmarklets are simple to install. When you hover over a bookmarklet on a web page, you’ll see that its address begins with “javascript:”. If you have your web browser’s bookmark or favorites toolbar visible, the easiest way to install a bookmarklet is with drag-and-drop. Press Ctrl+Shift+B to show your bookmarks toolbar if you’re using Chrome or Internet Explorer. In Firefox, right-click the toolbar and click Bookmarks Toolbar. Then just drag and drop the bookmarklet link to your bookmarks toolbar, and it is installed. You can also install bookmarklets manually: select the bookmarklet’s code and copy it to your clipboard. If the bookmarklet is a link, right-click or long-press the link and copy its address to your clipboard. Open your browser’s bookmarks manager, add a bookmark, and paste the JavaScript code directly into the address box. Give your bookmarklet a name and save it.

    How to Use a Bookmarklet

    Bookmarklets are easiest to use if you have your browser’s bookmarks toolbar enabled. Just click the bookmarklet and your browser will run it on the current page. If you don’t have a bookmarks toolbar — such as in Safari on an iPad or another mobile browser — just open your browser’s bookmarks pane and tap or click the bookmark. In mobile Chrome, you’ll need to launch the bookmarklet from the location bar. Open the web page you want to run the bookmarklet on, tap your location bar, and start searching for the name of the bookmarklet. Tap the bookmarklet’s name to run it on the current page. Note that the bookmarklet only appears here because we have it saved as a bookmark in Chrome. You’ll need to add the bookmarklet to your browser’s bookmarks before you can use it in this way. The location bar approach may also be necessary in other browsers. The trick is loading the bookmark so that it will be associated with your current tab. You can’t just open your bookmarks in a separate browser tab and run the bookmarklet from there — it will run on that other browser tab.

    Bookmarklets are powerful and flexible. While they’re not as flashy as browser extensions, they’re much more lightweight and allow you to get extension-like features in more limited mobile browsers.
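    To make “a piece of JavaScript code instead of a web address” concrete, here is a small reveal-password bookmarklet of the kind mentioned above (our own illustration, not one shipped by any service named here); paste it into the address box of a new bookmark:

        javascript:(function(){var f=document.querySelectorAll('input[type=password]');for(var i=0;i<f.length;i++){f[i].type='text';}})();

    Clicking it switches every password box on the current page to a plain text box; it works in any modern browser that allows changing an input’s type.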


  • Overriding LINQ extension methods

    - by Ruben Vermeersch
    Is there a way to override extension methods (provide a better implementation), without explicitly having to cast to the concrete type? I'm implementing a data type that is able to handle certain operations more efficiently than the default extension methods, but I'd like to keep the generality of IEnumerable. That way any IEnumerable can be passed, but when my class is passed in, it should be more efficient. As a toy example, consider the following:

        // Compile: dmcs -out:test.exe test.cs
        using System;

        namespace Test {
            public interface IBoat {
                void Float ();
            }

            public class NiceBoat : IBoat {
                public void Float () {
                    Console.WriteLine ("NiceBoat floating!");
                }
            }

            public class NicerBoat : IBoat {
                public void Float () {
                    Console.WriteLine ("NicerBoat floating!");
                }

                public void BlowHorn () {
                    Console.WriteLine ("NicerBoat: TOOOOOT!");
                }
            }

            public static class BoatExtensions {
                public static void BlowHorn (this IBoat boat) {
                    Console.WriteLine ("Patched on horn for {0}: TWEET", boat.GetType().Name);
                }
            }

            public class TestApp {
                static void Main (string [] args) {
                    IBoat niceboat = new NiceBoat ();
                    IBoat nicerboat = new NicerBoat ();

                    Console.WriteLine ("## Both should float:");
                    niceboat.Float ();
                    nicerboat.Float ();
                    // Output:
                    // NiceBoat floating!
                    // NicerBoat floating!

                    Console.WriteLine ();
                    Console.WriteLine ("## One has an awesome horn:");
                    niceboat.BlowHorn ();
                    nicerboat.BlowHorn ();
                    // Output:
                    // Patched on horn for NiceBoat: TWEET
                    // Patched on horn for NicerBoat: TWEET

                    Console.WriteLine ();
                    Console.WriteLine ("## That didn't work, but it does when we cast:");
                    (niceboat as NiceBoat).BlowHorn ();
                    (nicerboat as NicerBoat).BlowHorn ();
                    // Output:
                    // Patched on horn for NiceBoat: TWEET
                    // NicerBoat: TOOOOOT!

                    Console.WriteLine ();
                    Console.WriteLine ("## Problem is: I don't always know the type of the objects.");
                    Console.WriteLine ("## How can I make it use the class methods when they are");
                    Console.WriteLine ("## implemented and extension methods when they are not,");
                    Console.WriteLine ("## without having to explicitly cast?");
                }
            }
        }

    Is there a way to get the behavior from the second case, without explicit casting? Can this problem be avoided?
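    Since extension methods are bound at compile time, the usual workaround is to do the dispatch yourself inside the extension method against a capability interface. A sketch (IHornedBoat is my own name, not part of the question):

        public interface IHornedBoat {
            void BlowHorn ();
        }

        // NicerBoat would declare ": IBoat, IHornedBoat" so its horn is discoverable
        public static class BoatExtensions {
            public static void BlowHorn (this IBoat boat) {
                var horned = boat as IHornedBoat;
                if (horned != null) {
                    // Instance interface method wins; this does not recurse into the extension
                    horned.BlowHorn ();
                    return;
                }
                Console.WriteLine ("Patched on horn for {0}: TWEET", boat.GetType().Name);
            }
        }

    The cost is one runtime type test per call, but callers keep programming against the general interface while optimized implementations get picked up automatically.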


  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience

    Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post. This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let’s take a deep dive into one of these composers, the Oracle Composer.

    Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:

        • Who sees the changes
        • When the changes are made
        • What changes are made

    Change for me, change for you, change for all of you

    One of the most powerful aspects of Oracle Composer is its flexibility. Oracle uses Oracle Composer to make changes for a user or group of users – those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries. Business systems analysts can make changes to Oracle Fusion Applications UIs for groups of users or all users. Oracle’s Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata. This image illustrates how customization layers are applied.

    What time is it?

    Users make changes to UIs at design time, runtime, and design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: runtime changes and design time at runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that affect only the UIs they see. What is new with Oracle Composer is the last area: design time at runtime. A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application.

    What can I do?

    Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user.

    The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View. Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions:

        • Add content – including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows)
        • Rearrange content – performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around
        • Edit component properties and parameters – for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds
        • Hide or show components – hidden components can be re-shown
        • Delete components
        • Change page layout – users can select from eight pre-defined layouts
        • Edit page properties – create or edit a page’s parameters and display properties
        • Reset page customizations – remove edits made to the page in the current layer and/or reset the page to a previous state

    Detailed information on each of these capabilities and the additional actions not covered in the list above can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter. This image shows what the screen looks like in Design View.

    Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. This image shows what the screen looks like in Source View. Detailed information on Source View can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter.

    Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use, so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse. Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.

    From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.


  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Optimizing AES modes on Solaris for Intel Westmere

    Review

    AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness. AES encryption isn't usually used by itself because identical blocks of plaintext are always encrypted into identical blocks of ciphertext. This can be easily attacked with "dictionaries" of common blocks of text, allowing one to more easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because in theory one can keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice, a complete "code book" is not practical, even in electronic form, but large dictionaries of common plaintext blocks are still possible. Here's a diagram of encrypting input data using AES ECB mode:

                Block 1                                Block 2
             PlainTextInput                         PlainTextInput
                   |                                      |
                   |                                      |
                   \/                                     \/
        AESKey-->(AES Encryption)              AESKey-->(AES Encryption)
                   |                                      |
                   |                                      |
                   \/                                     \/
            CipherTextOutput                       CipherTextOutput
                Block 1                                Block 2

    What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and XORing the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption:

                Block 1                                Block 2
             PlainTextInput                         PlainTextInput
                   |                                      |
                   |                                      |
                   \/                                     \/
        IV >----->(XOR)            +-------------------->(XOR)            +---> . . . .
                   |               |                      |               |
                   \/              |                      \/              |
        AESKey-->(AES Encryption)  |           AESKey-->(AES Encryption)  |
                   |               |                      |               |
                   \/              |                      \/              |
            CipherTextOutput ------+               CipherTextOutput ------+
                Block 1                                Block 2

    The steps for CBC encryption are:

        1. Start with a 16-byte Initialization Vector (IV), chosen randomly.
        2. XOR the IV with the first block of input plaintext.
        3. Encrypt the result with AES using a user-provided key. The result is the first 16 bytes of output cryptotext.
        4. Use the cryptotext of the previous block (instead of the IV) to XOR with the next input block of plaintext.

    Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for subsequent blocks, the IV is just incremented by one. Also, the IV is XORed with the AES encryption result (not the plaintext input). Here's an illustration:

                Block 1                                Block 2
             PlainTextInput                         PlainTextInput
                   |                                      |
                   |                                      |
                   \/                                     \/
        AESKey-->(AES Encryption)              AESKey-->(AES Encryption)
                   |                                      |
                   \/                                     \/
        IV >----->(XOR)                IV + 1 >--------->(XOR)                IV + 2 ---> . . . .
                   |                                      |
                   \/                                     \/
            CipherTextOutput                       CipherTextOutput
                Block 1                                Block 2

    Optimization

    Which of these modes can be parallelized? ECB encryption/decryption can be parallelized because each block is just plain AES encryption or decryption, independent of every other block. CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning. CTR encryption and decryption can be parallelized because the input to each block is known: it's just the IV incremented by one for each subsequent block. So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption.

    How do we parallelize encryption? By interleaving. Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since pipeline stalls usually occur where the software encrypts/decrypts the next data block, by interleaving blocks we can avoid stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory.

    Other Optimizations

    Besides interleaving, the other optimizations performed are:

        • Loading the entire key schedule into the 128-bit %xmm registers. This is done once per 4 blocks of data (when 4 blocks are present). The entire "key schedule" (the user input key preprocessed for encryption and decryption) is loaded; this takes 11, 13, or 15 registers, for AES-128, AES-192, and AES-256, respectively. The input data is loaded into another %xmm register, and the same register contains the output result after encrypting/decrypting.
        • Using SSE4 (AESNI) instructions. Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor (exclusive or, very useful), movdqu (load/store a %xmm register from/to memory), pshufb (shuffle bytes for byte swapping), and pclmulqdq (carry-less multiply for GCM mode).
        • Combining AES encryption/decryption with CBC or CTR modes processing. Instead of loading input data twice (once for AES encryption/decryption, and again for modes (CTR or CBC, for example) processing), the input data is loaded once, and both the AES and modes operations occur in the same function.

    Performance

    Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @3.20GHz. Clarkdale is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case has already been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at a time.

    For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands. I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slightly faster than AES-256, as expected.

    The second table shows more detailed timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. The results shown are the percentage improvement as measured by an internal PKCS#11 microbenchmark. And keep in mind the previous baseline code already had optimized AESNI assembly! The key size (AES-128, 192, or 256) makes little difference in relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data.

    Availability

    This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, the Intel Westmere and Sandy Bridge microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command. For example:

        $ isainfo -v
        64-bit amd64 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this software. The Solaris libraries and kernel automatically determine if they are running on an AESNI-capable machine and execute the correctly-tuned software for the current microprocessor.

    Summary

    Maximum throughput of AES cipher modes can be achieved by combining AES encryption with modes processing, interleaving encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions.

    References

    "Block cipher modes of operation", Wikipedia - a good overview of AES modes (ECB, CBC, CTR, etc.)
    "Advanced Encryption Standard", Wikipedia
    "Current Modes" - describes NIST-approved block cipher modes (ECB, CBC, CFB, OFB, CCM, GCM)
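    To make the interleaving idea concrete in a form you can compile, here is a minimal sketch of 4-block-at-a-time AES-128 CTR using the AESNI compiler intrinsics. This is my own illustration of the technique, not the Solaris source, and it implements the standard CTR variant (counter block encrypted, result XORed with the plaintext); the carry out of the low 64 counter bits is ignored to keep it short:

        #include <stdint.h>
        #include <emmintrin.h>   /* SSE2: _mm_add_epi64, _mm_xor_si128, ... */
        #include <wmmintrin.h>   /* AESNI: _mm_aesenc_si128; compile with -maes */

        /* Encrypt 64 bytes (4 blocks) of CTR keystream and XOR with the input.
         * ks[0..10] is the expanded AES-128 key schedule (11 round keys);
         * ctr is the counter block, incremented in its low 64-bit lane. */
        static void aes128_ctr_x4(const __m128i ks[11], __m128i ctr,
                                  const uint8_t *in, uint8_t *out)
        {
            const __m128i one = _mm_set_epi64x(0, 1);
            __m128i c1 = _mm_add_epi64(ctr, one);
            __m128i c2 = _mm_add_epi64(c1, one);
            __m128i c3 = _mm_add_epi64(c2, one);

            /* Whitening: XOR counters with round key 0. */
            __m128i b0 = _mm_xor_si128(ctr, ks[0]);
            __m128i b1 = _mm_xor_si128(c1,  ks[0]);
            __m128i b2 = _mm_xor_si128(c2,  ks[0]);
            __m128i b3 = _mm_xor_si128(c3,  ks[0]);

            /* Rounds 1-9, interleaved so the 4 aesenc chains overlap in the pipeline. */
            for (int r = 1; r < 10; r++) {
                b0 = _mm_aesenc_si128(b0, ks[r]);
                b1 = _mm_aesenc_si128(b1, ks[r]);
                b2 = _mm_aesenc_si128(b2, ks[r]);
                b3 = _mm_aesenc_si128(b3, ks[r]);
            }
            b0 = _mm_aesenclast_si128(b0, ks[10]);
            b1 = _mm_aesenclast_si128(b1, ks[10]);
            b2 = _mm_aesenclast_si128(b2, ks[10]);
            b3 = _mm_aesenclast_si128(b3, ks[10]);

            /* Fused modes step: XOR keystream with input (encrypt == decrypt). */
            _mm_storeu_si128((__m128i *)(out +  0), _mm_xor_si128(b0, _mm_loadu_si128((const __m128i *)(in +  0))));
            _mm_storeu_si128((__m128i *)(out + 16), _mm_xor_si128(b1, _mm_loadu_si128((const __m128i *)(in + 16))));
            _mm_storeu_si128((__m128i *)(out + 32), _mm_xor_si128(b2, _mm_loadu_si128((const __m128i *)(in + 32))));
            _mm_storeu_si128((__m128i *)(out + 48), _mm_xor_si128(b3, _mm_loadu_si128((const __m128i *)(in + 48))));
        }

    The four aesenc dependency chains are independent, so the processor can overlap their latencies; that overlap is exactly the stall avoidance described above.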


  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on them. The app needs to be real-time responsive (ideally < 50ms to process each request). I have been doing some timing tests on the code I have and I found something very interesting (see below).

        clearLog();
        log("Log cleared");
        camera.QueryFrame();
        camera.QueryFrame();
        log("Camera buffer cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    Each time log is called, the time since the beginning of the processing is displayed. Here is my log output:

        [3 ms]Log cleared
        [41 ms]Camera buffer cleared
        [41 ms]Sx: 589 Sy: 414
        [112 ms]Camera output acuired for processing

    The timings are computed using a Stopwatch from System.Diagnostics.

    QUESTION 1
    I find this slightly interesting: when the same method is called twice in a row it executes in ~40ms total, but the next single call takes longer (~70ms). Assigning the value can't really be taking that long, right?

    QUESTION 2
    Also, the timing for each step recorded above varies from run to run. The values for some steps are sometimes as low as 0ms and sometimes as high as 100ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If it is for some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority, so that the speed test results will be consistently low (in terms of time)?

    EDIT
    I changed the code to remove the two blank query frames from above, so the code is now:

        clearLog();
        log("Log cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    The timing results are now:

        [2 ms]Log cleared
        [3 ms]Sx: 589 Sy: 414
        [5 ms]Camera output acuired for processing

    The next steps now take longer (sometimes the next step jumps to after 20-30ms, while it was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
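    On the priority question: you cannot stop the scheduler entirely, but you can ask Windows to favor your process and thread. A short sketch with the standard .NET knobs (the chosen values are a reasonable starting point, not a guarantee of stable timings):

        using System.Diagnostics;
        using System.Threading;

        // High (rather than RealTime) is normally the safe upper bound for an app.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

    Also note that extra cost on a first call is often JIT compilation or device warm-up rather than scheduling, which may be why the second QueryFrame call is so much cheaper than the first.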


  • RIA Services Repository Save does not work!?

    - by Savvas Sopiadis
    Hello everybody! Doing my first SL4 MVVM RIA-based application, I ran into the following situation: updating a record (EF4, no POCOs!) in the SL client seems to take place, but the values in the DBMS are unchanged. Debugging with Fiddler, the message on save contains (amongst others):

        EntityActions.nil? b9http://schemas.microsoft.com/2003/10/Serialization/Arrays^HasMemberChanges?^Id?^ Operation?Update

    I assume that this says only: hey! the DBMS should do an update on this record, AND nothing more! Is that right?! I'm using a generic repository like this:

        public class Repository<T> : IRepository<T> where T : class
        {
            IObjectSet<T> _objectSet;
            IObjectContext _objectContext;

            public Repository(IObjectContext objectContext)
            {
                this._objectContext = objectContext;
                _objectSet = objectContext.CreateObjectSet<T>();
            }

            public IQueryable<T> AsQueryable() { return _objectSet; }
            public IEnumerable<T> GetAll() { return _objectSet.ToList(); }
            public IEnumerable<T> Find(Expression<Func<T, bool>> where) { return _objectSet.Where(where); }
            public T Single(Expression<Func<T, bool>> where) { return _objectSet.Single(where); }
            public T First(Expression<Func<T, bool>> where) { return _objectSet.First(where); }
            public void Delete(T entity) { _objectSet.DeleteObject(entity); }
            public void Add(T entity) { _objectSet.AddObject(entity); }
            public void Attach(T entity) { _objectSet.Attach(entity); }
            public void Save() { _objectContext.SaveChanges(); }
        }

    The DomainService update method is the following:

        [Update]
        public void UpdateCulture(Culture currentCulture)
        {
            if (currentCulture.EntityState == System.Data.EntityState.Detached)
            {
                this.cultureRepository.Attach(currentCulture);
            }
            this.cultureRepository.Save();
        }

    I know that the currentCulture entity is detached. What confuses me (amongst other things) is this: is the _objectContext still alive? (Which would mean it is aware of the changes made to the record, so simply calling Attach() and then Save() should be enough!?!?) What am I missing? Development environment: VS2010 RC, Entity Framework 4 (no POCOs). Thanks in advance.
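    One detail worth checking (my reading of EF4, not something established in the post): ObjectSet<T>.Attach puts the entity into the context in the Unchanged state, so a following SaveChanges has nothing to write. After attaching a detached entity you have to tell the state manager it is modified, e.g. with a small addition to the repository (this assumes the IObjectContext abstraction exposes the underlying ObjectStateManager):

        public void AttachAsModified(T entity)
        {
            _objectSet.Attach(entity);
            // Attach marks the entity Unchanged; flip it to Modified so
            // SaveChanges emits an UPDATE for this record.
            _objectContext.ObjectStateManager
                .ChangeObjectState(entity, System.Data.EntityState.Modified);
        }

    and calling that from UpdateCulture instead of the plain Attach.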


  • progressIndicator does not update till about 10 seconds after awakeFromNib occurs

    - by theprojectabot
    I have been trying to get this one section of my UI to update immediately when the document loads into view. awakeFromNib fires the pasted code and then starts a timer that repeats every 10 seconds... I load a default storage location, ~/Movies, which shows up immediately, yet the network location saved in the document (pulled from the XML) only seems to show up after the second firing of the updateDiskSpaceDisplay timer. I have set breakpoints and know that the ivars containing the values that go into *fileSystemAttributes hold the network location right when awakeFromNib occurs... I'm confused why it magically appears after the second firing instead of displaying the right values immediately.

        - (void)updateDiskSpaceDisplay
        {
            // Obtain information about the file system used on the selected storage path.
            NSError *error = NULL;
            NSDictionary *fileSystemAttributes = [[NSFileManager defaultManager]
                attributesOfFileSystemForPath:[[[self settings] containerSettings] storagePath]
                                        error:&error];
            if( !fileSystemAttributes ) return;

            // Get the byte capacity of the drive.
            long long byteCapacity = [[fileSystemAttributes objectForKey:NSFileSystemSize] unsignedLongLongValue];

            // Get the number of free bytes.
            long long freeBytes = [[fileSystemAttributes objectForKey:NSFileSystemFreeSize] unsignedLongLongValue];

            // Update the level indicator, and the text fields to show the current information.
            [totalDiskSpaceField setStringValue:[self formattedStringFromByteCount:byteCapacity]];
            [totalDiskSpaceField setNeedsDisplay:YES];
            [usedDiskSpaceField setStringValue:[self formattedStringFromByteCount:(byteCapacity - freeBytes)]];
            [usedDiskSpaceField setNeedsDisplay:YES];
            [diskSpaceIndicator setMaxValue:100];
            [diskSpaceIndicator setIntValue:(((float) (byteCapacity - freeBytes) / (float) byteCapacity) * 100.0)];
            [diskSpaceIndicator display:YES];
        }

    Thoughts? My awakeFromNib:

        - (void)awakeFromNib
        {
            [documentWindow setAcceptsMouseMovedEvents:YES];
            [documentWindow setDelegate:self];
            [self updateSettingsDisplay];
            [self updateDiskSpaceDisplay];
            [self setDiskSpaceUpdateTimer:[NSTimer scheduledTimerWithTimeInterval:10.0
                                                                           target:self
                                                                         selector:@selector(updateDiskSpaceDisplay)
                                                                         userInfo:NULL
                                                                          repeats:YES]];
            [self setUpClipInfoTabButtons];
            [self performSelector:@selector(setupEngineController) withObject:NULL afterDelay:0.1];
        }
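    One hedged explanation: awakeFromNib can run before the document's XML has been read into the settings objects, so the first updateDiskSpaceDisplay only sees the default ~/Movies path, and the network value appears on a later timer fire. If this controller belongs to an NSDocument, re-running the refresh once the nib and the document data are both in place avoids the wait (windowControllerDidLoadNib: is standard NSDocument API; the override itself is my suggestion):

        - (void)windowControllerDidLoadNib:(NSWindowController *)controller
        {
            [super windowControllerDidLoadNib:controller];
            // The document's XML has been read by this point, so the stored
            // network storage path is available for the first display refresh.
            [self updateDiskSpaceDisplay];
        }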


  • External USB 3.0 hard drive is not recognised when plugged into a USB 3.0 port (Ubuntu Natty 64-bit)

    - by kimangroo
    I have an Iomega Prestige Portable External Hard Drive 1TB USB 3.0. It works fine on Windows 7 as a USB 3.0 drive, but it isn't detected on Ubuntu Natty 64-bit, 2.6.38-8-generic. fdisk -l cannot see it at all:

    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1bed746b

    Device Boot  Start    End    Blocks     Id  System
    /dev/sda1    1        1689   13560832   27  Unknown
    /dev/sda2 *  1689     1702   102400     7   HPFS/NTFS
    /dev/sda3    1702     19978  146805760  7   HPFS/NTFS
    /dev/sda4    19978    60802  327914497  5   Extended
    /dev/sda5    25555    60802  283120640  7   HPFS/NTFS
    /dev/sda6    19978    23909  31571968   83  Linux
    /dev/sda7    23909    25555  13218816   82  Linux swap / Solaris
    Partition table entries are not in disk order

    lsusb can see it:

    Bus 003 Device 003: ID 059b:0070 Iomega Corp.
    Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 002 Device 004: ID 05fe:0011 Chic Technology Corp. Browser Mouse
    Bus 002 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 005: ID 0489:e00f Foxconn / Hon Hai
    Bus 001 Device 004: ID 0c45:64b5 Microdia
    Bus 001 Device 003: ID 08ff:168f AuthenTec, Inc.
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    And dmesg | grep -i xhci (I may have unplugged the drive and plugged it back in again after booting):

    [ 1.659060] pci 0000:04:00.0: xHCI HW did not halt within 2000 usec status = 0x0
    [ 11.484971] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
    [ 11.484997] xhci_hcd 0000:04:00.0: setting latency timer to 64
    [ 11.485002] xhci_hcd 0000:04:00.0: xHCI Host Controller
    [ 11.485064] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3
    [ 11.636149] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000
    [ 11.636241] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X
    [ 11.636246] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X
    [ 11.636251] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X
    [ 11.636256] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X
    [ 11.636261] xhci_hcd 0000:04:00.0: irq 47 for MSI/MSI-X
    [ 11.639654] xHCI xhci_add_endpoint called for root hub
    [ 11.639655] xHCI xhci_check_bandwidth called for root hub
    [ 11.956366] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 2
    [ 12.001073] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.007059] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.012932] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.018922] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.049139] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.056754] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.131607] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst
    [ 12.179717] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 12.686876] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint
    [ 12.687058] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration
    [ 12.687152] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring
    [ 43.330737] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 2
    [ 43.422579] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 43.422658] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af00
    [ 43.422665] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af40
    [ 43.422671] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af80
    [ 43.422677] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669afc0
    [ 43.531159] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst
    [ 125.160248] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst
    [ 903.766466] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 3
    [ 903.807789] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.813530] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.819400] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.825104] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.855067] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.862314] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 903.862597] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst
    [ 903.913211] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 904.424416] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint
    [ 904.424599] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration
    [ 904.424700] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring
    [ 935.139021] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 3
    [ 935.226075] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep
    [ 935.226140] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b00
    [ 935.226148] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b40
    [ 935.226153] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b80
    [ 935.226159] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186bc0
    [ 935.343339] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst

    I thought it might be that the firmware wasn't compatible with Linux or something, but when booting a live image of partedmagic (2.6.38.4-pmagic), the drive was detected fine: I could mount it and got USB 3.0 speeds (at least double the speeds I got from plugging the same drive into USB 2 ports). dmesg in partedmagic did say something about no SuperSpeed endpoint, which was an error I saw in a previous dmesg on Ubuntu:

    Jun 27 15:49:02 (none) user.info kernel: [ 2.978743] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
    Jun 27 15:49:02 (none) user.debug kernel: [ 2.978771] xhci_hcd 0000:04:00.0: setting latency timer to 64
    Jun 27 15:49:02 (none) user.info kernel: [ 2.978781] xhci_hcd 0000:04:00.0: xHCI Host Controller
    Jun 27 15:49:02 (none) user.info kernel: [ 2.978856] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3
    Jun 27 15:49:02 (none) user.info kernel: [ 3.089458] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.089541] xhci_hcd 0000:04:00.0: irq 42 for MSI/MSI-X
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.089544] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.089546] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.089548] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.089550] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X
    Jun 27 15:49:02 (none) user.warn kernel: [ 3.092857] usb usb3: No SuperSpeed endpoint companion for config 1 interface 0 altsetting 0 ep 129: using minimum values
    Jun 27 15:49:02 (none) user.info kernel: [ 3.092864] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003
    Jun 27 15:49:02 (none) user.info kernel: [ 3.092866] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
    Jun 27 15:49:02 (none) user.info kernel: [ 3.092867] usb usb3: Product: xHCI Host Controller
    Jun 27 15:49:02 (none) user.info kernel: [ 3.092869] usb usb3: Manufacturer: Linux 2.6.38.4-pmagic xhci_hcd
    Jun 27 15:49:02 (none) user.info kernel: [ 3.092870] usb usb3: SerialNumber: 0000:04:00.0
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.092961] xHCI xhci_add_endpoint called for root hub
    Jun 27 15:49:02 (none) user.debug kernel: [ 3.092963] xHCI xhci_check_bandwidth called for root hub

    Well, I have no idea what's going wrong, and I haven't had much luck from Google and the forums so far; just a number of unanswered threads from people with similar error messages and problems. Hopefully someone here can help or point me in the right direction!

    Read the article

  • Android App to call a number on button click

    - by FosterZ
    Hey guys, this is my first Android app (I'm learning), so I want to call the number given in the textbox, but I'm getting the error "The application 'xyz' (process com.adroid) has stopped unexpectedly". The following is the code I have done so far. Where am I going wrong?

    EditText txtPhn;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        Button callButton = (Button)findViewById(R.id.btnCall);
        txtPhn = (EditText)findViewById(R.id.txtPhnNumber);
        callButton.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                try {
                    Intent callIntent = new Intent(Intent.ACTION_CALL);
                    callIntent.setData(Uri.parse("tel:"+txtPhn.getText().toString()));
                    startActivity(callIntent);
                } catch (ActivityNotFoundException activityException) {
                    Log.e("Calling a Phone Number", "Call failed", activityException);
                }
            }
        });
    }

    EDITED LogCat:

    03-09 11:23:25.874: ERROR/AndroidRuntime(370): FATAL EXCEPTION: main
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): java.lang.SecurityException: Permission Denial: starting Intent { act=android.intent.action.CALL dat=tel:xxx-xxx-xxxx flg=0x10000000 cmp=com.android.phone/.OutgoingCallBroadcaster } from ProcessRecord{40738d70 370:org.krish.android/10034} (pid=370, uid=10034) requires android.permission.CALL_PHONE
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.os.Parcel.readException(Parcel.java:1322)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.os.Parcel.readException(Parcel.java:1276)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.app.ActivityManagerProxy.startActivity(ActivityManagerNative.java:1351)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.app.Instrumentation.execStartActivity(Instrumentation.java:1374)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.app.Activity.startActivityForResult(Activity.java:2827)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.app.Activity.startActivity(Activity.java:2933)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at org.krish.android.caller$1.onClick(caller.java:29)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.view.View.performClick(View.java:2485)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.view.View$PerformClick.run(View.java:9080)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.os.Handler.handleCallback(Handler.java:587)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.os.Handler.dispatchMessage(Handler.java:92)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.os.Looper.loop(Looper.java:123)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at android.app.ActivityThread.main(ActivityThread.java:3683)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at java.lang.reflect.Method.invokeNative(Native Method)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at java.lang.reflect.Method.invoke(Method.java:507)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
    03-09 11:23:25.874: ERROR/AndroidRuntime(370): at dalvik.system.NativeStart.main(Native Method)
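    The LogCat names the missing piece itself: the SecurityException says the intent "requires android.permission.CALL_PHONE". A minimal manifest entry of the kind that should satisfy it (a sketch; the usual placement is inside the <manifest> element, alongside <application>, so verify against your own AndroidManifest.xml):

    <uses-permission android:name="android.permission.CALL_PHONE" />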

    Read the article

  • Sending large serialized objects over sockets is failing only when trying to grow the byte Array, bu

    - by FinancialRadDeveloper
    I have code where I am trying to grow the byte array while receiving the data over my socket. This is erroring out.

    public bool ReceiveObject2(ref Object objRec, ref string sErrMsg)
    {
        try
        {
            byte[] buffer = new byte[1024];
            byte[] byArrAll = new byte[0];
            bool bAllBytesRead = false;
            int iRecLoop = 0;

            // grow the byte array to match the size of the object, so we can put whatever we
            // like through the socket as long as the object serialises and is binary formatted
            while (!bAllBytesRead)
            {
                if (m_socClient.Receive(buffer) > 0)
                {
                    byArrAll = Combine(byArrAll, buffer);
                    iRecLoop++;
                }
                else
                {
                    m_socClient.Close();
                    bAllBytesRead = true;
                }
            }

            MemoryStream ms = new MemoryStream(buffer);
            BinaryFormatter bf1 = new BinaryFormatter();
            ms.Position = 0;
            Object obj = bf1.Deserialize(ms);
            objRec = obj;
            return true;
        }
        catch (System.Runtime.Serialization.SerializationException se)
        {
            objRec = null;
            sErrMsg += "SocketClient.ReceiveObject " + "Source " + se.Source + "Error : " + se.Message;
            return false;
        }
        catch (Exception e)
        {
            objRec = null;
            sErrMsg += "SocketClient.ReceiveObject " + "Source " + e.Source + "Error : " + e.Message;
            return false;
        }
    }

    private byte[] Combine(byte[] first, byte[] second)
    {
        byte[] ret = new byte[first.Length + second.Length];
        Buffer.BlockCopy(first, 0, ret, 0, first.Length);
        Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
        return ret;
    }

    Error: mscorlib Error : The input stream is not a valid binary format. The starting contents (in bytes) are: 68-61-73-43-68-61-6E-67-65-73-3D-22-69-6E-73-65-72 ...

    Yet when I just cheat and use a MASSIVE buffer size it's fine:

    public bool ReceiveObject(ref Object objRec, ref string sErrMsg)
    {
        try
        {
            byte[] buffer = new byte[5000000];
            m_socClient.Receive(buffer);

            MemoryStream ms = new MemoryStream(buffer);
            BinaryFormatter bf1 = new BinaryFormatter();
            ms.Position = 0;
            Object obj = bf1.Deserialize(ms);
            objRec = obj;
            return true;
        }
        catch (Exception e)
        {
            objRec = null;
            sErrMsg += "SocketClient.ReceiveObject " + "Source " + e.Source + "Error : " + e.Message;
            return false;
        }
    }

    This is really killing me. I don't know why it's not working. I have lifted the Combine from a suggestion on here too, so I am pretty sure this is not doing the wrong thing? I hope someone can point out where I am going wrong.
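    Two details in the first version stand out (an observation on the posted code, not a tested fix): the MemoryStream is constructed from buffer, the last 1 KB chunk, rather than the accumulated byArrAll, and every pass appends the whole 1024-byte buffer even when Receive read fewer bytes, so stray bytes end up in the stream. A sketch of the receive loop with both addressed, reusing the question's own names:

    int bytesRead;
    while ((bytesRead = m_socClient.Receive(buffer)) > 0)
    {
        // append only the bytes actually read, not the whole 1 KB buffer
        byte[] chunk = new byte[bytesRead];
        Buffer.BlockCopy(buffer, 0, chunk, 0, bytesRead);
        byArrAll = Combine(byArrAll, chunk);
    }
    m_socClient.Close();

    // deserialize the accumulated bytes, not the last partial buffer
    MemoryStream ms = new MemoryStream(byArrAll);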

    Read the article

  • C# - WebBrowser control seems to cache screenshots

    - by Justin
    Hey, I'm using the WebBrowser control in an ASP.NET MVC 2 app (don't judge, I'm doing it in an admin section only to be used by me), here's the code:

    public static class Screenshot
    {
        private static string _url;
        private static int _width;
        private static byte[] _bytes;

        public static byte[] Get(string url)
        {
            // This method gets a screenshot of the webpage
            // rendered at its full size (height and width)
            return Get(url, 50);
        }

        public static byte[] Get(string url, int width)
        {
            //set properties.
            _url = url;
            _width = width;

            //start screen scraper.
            var webBrowseThread = new Thread(new ThreadStart(TakeScreenshot));
            webBrowseThread.SetApartmentState(ApartmentState.STA);
            webBrowseThread.Start();

            //check every second if it got the screenshot yet.
            //i know, the thread sleep is terrible, but it's the secure section, don't judge...
            int numChecks = 20;
            for (int k = 0; k < numChecks; k++)
            {
                Thread.Sleep(1000);
                if (_bytes != null)
                {
                    return _bytes;
                }
            }
            return null;
        }

        private static void TakeScreenshot()
        {
            try
            {
                //load the webpage into a WebBrowser control.
                using (WebBrowser wb = new WebBrowser())
                {
                    wb.ScrollBarsEnabled = false;
                    wb.ScriptErrorsSuppressed = true;
                    wb.Navigate(_url);
                    while (wb.ReadyState != WebBrowserReadyState.Complete)
                    {
                        Application.DoEvents();
                    }

                    //set the size of the WebBrowser control.
                    //take Screenshot of the web page's full width.
                    wb.Width = wb.Document.Body.ScrollRectangle.Width;
                    //take Screenshot of the web page's full height.
                    wb.Height = wb.Document.Body.ScrollRectangle.Height;

                    //get a Bitmap representation of the webpage as it's rendered in the WebBrowser control.
                    var bitmap = new Bitmap(wb.Width, wb.Height);
                    wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));

                    //resize.
                    var height = _width * (bitmap.Height / bitmap.Width);
                    var thumbnail = bitmap.GetThumbnailImage(_width, height, null, IntPtr.Zero);

                    //convert to byte array.
                    var ms = new MemoryStream();
                    thumbnail.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                    _bytes = ms.ToArray();
                }
            }
            catch (Exception exc)
            {
                //TODO: why did screenshot fail?
                string message = exc.Message;
            }
        }
    }

    This works fine for the first screenshot that I take, however if I try to take subsequent screenshots of different URLs, it saves screenshots of the first URL for the new URL, or sometimes it'll save the screenshot from 3 or 4 URLs ago. I'm creating a new instance of WebBrowser for each screenshot and am disposing of it properly with the "using" block, any idea why it's behaving this way? Thanks, Justin
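    One plausible culprit, offered as a guess rather than a confirmed diagnosis: all of the state lives in static fields, and _bytes is never cleared between calls, so a capture that finishes after the 20-second polling window (or a second overlapping request) can leave one URL's bytes behind for a later call to pick up. The minimal change is resetting the state on every call; the cleaner one would be making Screenshot instance-based so each capture owns its own fields. A sketch of the former:

    public static byte[] Get(string url, int width)
    {
        _url = url;
        _width = width;
        _bytes = null; // clear the previous capture so stale bytes can't be returned

        var webBrowseThread = new Thread(new ThreadStart(TakeScreenshot));
        webBrowseThread.SetApartmentState(ApartmentState.STA);
        webBrowseThread.Start();

        int numChecks = 20;
        for (int k = 0; k < numChecks; k++)
        {
            Thread.Sleep(1000);
            if (_bytes != null)
            {
                return _bytes;
            }
        }
        return null;
    }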

    Read the article

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool. But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ?

    The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better.

    Unfortunately, this is not always a good measure. Well written assembly language outperforms well written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.

    Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the for loop and the LINQ expression. This is just a simple find comparison:

    // find implemented via LINQ
    public static bool FindViaLinq(IEnumerable<int> list, int target)
    {
        return list.Any(item => item == target);
    }

    // find implemented via standard iteration
    public static bool FindViaIteration(IEnumerable<int> list, int target)
    {
        foreach (var i in list)
        {
            if (i == target)
            {
                return true;
            }
        }
        return false;
    }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

    List<T> On 100,000 Iterations
    Method      Size     Total (ms)  Per Iteration (ms)  % Slower
    Any         10       26          0.00046             30.00%
    Iteration   10       20          0.00023             -
    Any         100      116         0.00201             18.37%
    Iteration   100      98          0.00118             -
    Any         1000     1058        0.01853             16.78%
    Iteration   1000     906         0.01155             -
    Any         10,000   10,383      0.18189             17.41%
    Iteration   10,000   8843        0.11362             -
    Any         100,000  104,004     1.8297              18.27%
    Iteration   100,000  87,941      1.13163             -

    The LINQ expression is running about 17% slower for average size collections and worse for smaller collections. Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones.
    I decided to try the TakeWhile() algorithm using a Count() to get the position stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

    // Linq form
    public static int GetTargetPosition1(IEnumerable<int> list, int target)
    {
        return list.TakeWhile(item => item != target).Count();
    }

    // traditionally iterative form
    public static int GetTargetPosition2(IEnumerable<int> list, int target)
    {
        int count = 0;
        foreach (var i in list)
        {
            if (i == target)
            {
                break;
            }
            ++count;
        }
        return count;
    }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

    List<T> On 100,000 Iterations
    Method      Size     Total (ms)  Per Iteration (ms)  % Slower
    TakeWhile   10       41          0.00041             128%
    Iteration   10       18          0.00018             -
    TakeWhile   100      171         0.00171             88%
    Iteration   100      91          0.00091             -
    TakeWhile   1000     1604        0.01604             94%
    Iteration   1000     825         0.00825             -
    TakeWhile   10,000   15765       0.15765             92%
    Iteration   10,000   8204        0.08204             -
    TakeWhile   100,000  156950      1.5695              92%
    Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought, well, what if TakeWhile() is not the right tool for the job? The problem is TakeWhile returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate.

    So what if that back and forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it, we just happen to be counting on the way. This mostly matches Any().

    // a new method that uses linq but evaluates the count in a closure.
    public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
    {
        int count = 0;
        list.Any(item =>
            {
                if (item == target)
                {
                    return true;
                }
                ++count;
                return false;
            });
        return count;
    }

    Now how does this one compare?
    List<T> On 100,000 Iterations
    Method         Size     Total (ms)  Per Iteration (ms)  % Slower
    TakeWhile      10       41          0.00041             128%
    Any w/Closure  10       23          0.00023             28%
    Iteration      10       18          0.00018             -
    TakeWhile      100      171         0.00171             88%
    Any w/Closure  100      116         0.00116             27%
    Iteration      100      91          0.00091             -
    TakeWhile      1000     1604        0.01604             94%
    Any w/Closure  1000     1101        0.01101             33%
    Iteration      1000     825         0.00825             -
    TakeWhile      10,000   15765       0.15765             92%
    Any w/Closure  10,000   10802       0.10802             32%
    Iteration      10,000   8204        0.08204             -
    TakeWhile      100,000  156950      1.5695              92%
    Any w/Closure  100,000  108378      1.08378             33%
    Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for -- which is exactly the task we're attempting to do.

    So the lesson there is, make sure when you use a LINQ expression you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even the Any() with the count in the closure is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was roughly the difference between 1.08 ms and 0.82 ms per iteration in a List<T>. That's really not that bad at all in the grand scheme of things. Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft's development to have coded and unit tested those algorithms fully for me instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.
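    For readers who want to reproduce numbers like these, a minimal harness of the sort the tables imply might look as follows. This is an assumption about the setup, not the author's actual benchmark code; FindViaLinq and FindViaIteration are the methods defined earlier in the post.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    class Benchmark
    {
        static void Main()
        {
            const int iterations = 100000;
            var rand = new Random(42); // fixed seed so both methods see the same data
            List<int> list = Enumerable.Range(0, 1000).Select(_ => rand.Next()).ToList();
            int[] targets = Enumerable.Range(0, iterations).Select(_ => rand.Next()).ToArray();

            var sw = Stopwatch.StartNew();
            foreach (int t in targets) FindViaLinq(list, t);
            sw.Stop();
            Console.WriteLine("Any():     {0} ms total, {1} ms/iteration",
                sw.ElapsedMilliseconds, (double)sw.ElapsedMilliseconds / iterations);

            sw = Stopwatch.StartNew();
            foreach (int t in targets) FindViaIteration(list, t);
            sw.Stop();
            Console.WriteLine("Iteration: {0} ms total, {1} ms/iteration",
                sw.ElapsedMilliseconds, (double)sw.ElapsedMilliseconds / iterations);
        }

        // the two methods from the top of the post, repeated so this compiles standalone
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target) return true;
            }
            return false;
        }
    }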

    Read the article

  • Windows Azure Evolution &ndash; Deploy Web Sites (WAWS Part 3)

    - by Shaun
    This is the sixth post of my Windows Azure Evolution series. After talking a bit about the new caching preview feature in the previous one, let's get back to Windows Azure Web Sites (WAWS).

    Git and GitHub Integration

    In the third post I introduced the overall functionality of WAWS and demonstrated how to create a WordPress blog through the built-in application gallery. And in the fourth post I covered how to use the TFS service preview to deploy an ASP.NET MVC application to the web site through the TFS integration. WAWS also has Git integration. I'm not going to talk in much detail about the Git and GitHub integration, since there is plenty of information on the internet you can refer to. To enable Git, just go to the web site item in the developer portal and click "Set up Git publishing". After you specify the username and password, the Windows Azure platform will establish the Git integration and provide some basic guidance. As you can see, you can download the Git binaries, commit the files and then push to the remote repository. Regarding GitHub, since it's built on top of Git, it should work as well. Maarten Balliauw has a wonderful post about how to integrate GitHub with Windows Azure Web Sites, which you can find here.

    WebMatrix 2 RC

    WebMatrix is a lightweight web application development tool provided by Microsoft. It utilizes WebDeploy or FTP to deploy the web application to the server, and WebMatrix 2.0 RC adds the ability to work with Windows Azure. First of all we need to download the latest WebMatrix 2 through the Web Platform Installer 4.0. Just open the WebPI and search for "WebMatrix", or go to its home page and download its web installer. Once we have WebMatrix 2, we need to download the publish file of our WAWS. Let's go to the developer portal, open the web site we want to deploy, and download the publish file from the link on the right-hand side. This file contains the necessary information for publishing the web site through WebDeploy and FTP, and it can be used in WebMatrix, Visual Studio, etc. Once we have the publish file we can open WebMatrix and click Open Site, Remote Site. This brings up a dialog where we can input the information for the remote site. Since we have our publish file already, we can click "Import publish settings" and select the publish file; the site information will then be populated automatically. Click OK, and WebMatrix will connect to the remote site (the WAWS we deployed earlier) and retrieve the folder and file information. We can open and modify files in WebMatrix. But since WebMatrix is a lightweight web application tool, we cannot update the backend C# code, so in this case we will modify the frontend home page only. After saving our modification, WebMatrix will compare the local and remote files and upload only the modified files to Windows Azure, using the connection information in the publish file. Since it only uploads the files that were changed, this minimizes bandwidth use and deployment duration. After a few seconds we can go back to the website and see that the modification has been applied.

    Visual Studio and WebDeploy

    The publish file we downloaded can be used not only in WebMatrix but also in Visual Studio. As we know, in Visual Studio we can publish a web application by clicking the "Publish" item from the project context menu in the solution explorer, and we can specify WebDeploy, FTP or File System as the publish target.
    Now we can use the WAWS publish file to let Visual Studio publish the web application to WAWS. Let's create a new ASP.NET MVC Web Application in Visual Studio 2010 and then click "Publish" in the solution explorer. Once we have the Windows Azure SDK 1.7 installed, it updates the web application publish dialog, so now we can import the publish information from the publish file. Select WebDeploy as the publish method. We can select FTP as well, which is supported by Windows Azure, and the FTP information is in the same publish file. In the last step the publish wizard can check the files that will be uploaded to the remote site before actually publishing. This gives us a chance to review and amend the files. Same as WebMatrix, Visual Studio will compare the files between local and WAWS and determine which have been changed and need to be published. Finally, Visual Studio will publish the web application to Windows Azure through the WebDeploy protocol. Once it finishes we can browse our website.

    FTP Deployment

    The publish file we downloaded contains the connection information to our web site via both WebDeploy and FTP. When using WebMatrix and Visual Studio we can select WebDeploy or FTP. The WebDeploy method can be used very easily from WebMatrix and Visual Studio, with the file compare feature, but FTP gives more flexibility: we can use any FTP client to upload files to Windows Azure, regardless of which client and OS we are using. Open the publish file in any text editor and we can find the connection information very easily. As you can see, the publish file is actually an XML file with the WebDeploy and FTP information in plain text attributes (a small parsing sketch follows at the end of this post). And once we have the FTP URL, username and password, we can connect to the site and upload and download files. For example, I opened FileZilla and connected to my WAWS through FTP. Then I can download files I am interested in, modify them on my local disk, and upload them back to Windows Azure through FileZilla. Then I can see the new page.

    Summary

    In this simple and quick post I introduced various approaches to deploying our web application to Windows Azure Web Sites. It supports the TFS integration which I mentioned previously, and it supports Git and GitHub, WebDeploy and FTP as well.

    Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
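    To illustrate the point about the publish file being plain XML, here is a small sketch that pulls the FTP endpoint out of a downloaded .publishsettings file. The element and attribute names (publishProfile, publishMethod, publishUrl, userName, userPWD) are based on the common publish-settings layout and should be verified against your own file; the file name is a placeholder.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class PublishSettingsReader
    {
        static void Main()
        {
            // "mysite.publishsettings" is a hypothetical path to the downloaded file
            XDocument doc = XDocument.Load("mysite.publishsettings");

            // pick the profile whose publishMethod attribute says FTP
            XElement ftp = doc.Descendants("publishProfile")
                .First(p => (string)p.Attribute("publishMethod") == "FTP");

            Console.WriteLine("FTP URL:  {0}", (string)ftp.Attribute("publishUrl"));
            Console.WriteLine("Username: {0}", (string)ftp.Attribute("userName"));
            Console.WriteLine("Password: {0}", (string)ftp.Attribute("userPWD"));
        }
    }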

    Read the article

  • Daylight saving time and Timezone best practices

    - by Oded
    I am hoping to make this question and the answers to it the definitive guide to dealing with daylight saving time, in particular for dealing with the actual change-overs. If you have anything to add, please do.

    Many systems are dependent on keeping accurate time; the problem is with changes to time due to daylight savings - moving the clock forward or backwards. For instance, one has business rules in an order-taking system that depend on the time of the order - if the clock changes, the rules might not be as clear. How should the time of the order be persisted? There is of course an endless number of scenarios - this one is simply an illustrative one.

    How have you dealt with the daylight saving issue? What assumptions are part of your solution? (looking for context here)

    As important, if not more so: What did you try that did not work? Why did it not work?

    I would be interested in programming, OS, data persistence and other pertinent aspects of the issue. General answers are great, but I would also like to see details, especially if they are only available on one platform.

    Summary of answers and other data: (please add yours)

    Do:
    - Always persist time according to a unified standard that is not affected by daylight savings. GMT and UTC have been mentioned by different people.
    - Include the local time offset (including DST offset) in stored timestamps.
    - Remember that DST offsets are not always an integer number of hours (e.g. Indian Standard Time is UTC+05:30).
    - If using Java, use JodaTime. - http://joda-time.sourceforge.net/
    - Create a table TZOffsets with three columns: RegionClassId, StartDateTime, and OffsetMinutes (int, in minutes). See answer.
    - Check if your DBMS needs to be shut down during transition.
    - Business rules should always work on civil time.
    - Internally, keep timestamps in something like civil-time-seconds-from-epoch. See answer.
    - Only convert to local times at the last possible moment.

    Don't:
    - Do not use javascript date and time calculations in web apps unless you ABSOLUTELY have to.

    Testing:
    - When testing, make sure you test countries in the Western and Eastern hemispheres, with both DST in progress and not, and a country that does not use DST (6 in total).

    Reference:
    - Olson database, aka Tz_database - ftp://elsie.nci.nih.gov/pub
    - Sources for Time Zone and DST - http://www.twinsun.com/tz/tz-link.htm
    - ISO format (ISO 8601) - http://en.wikipedia.org/wiki/ISO_8601
    - Mapping between Olson database and Windows TimeZone Ids, from the Unicode Consortium - http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/windows_tzid.html
    - TimeZone page on WikiPedia - http://en.wikipedia.org/wiki/Tz_database
    - StackOverflow questions tagged dst - http://stackoverflow.com/questions/tagged/dst
    - StackOverflow questions tagged timezone - http://stackoverflow.com/questions/tagged/timezone

    Other:
    - Lobby your representative to end the abomination that is DST. We can always hope...
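    As a concrete illustration of the "persist a unified standard, keep the offset, convert late" advice (an added example, not from the original thread), .NET's DateTimeOffset carries an instant together with the local offset that applied when it was captured:

    using System;

    class OrderTimeExample
    {
        static void Main()
        {
            DateTimeOffset taken = DateTimeOffset.Now;      // instant + local offset at capture
            DateTimeOffset utc   = taken.ToUniversalTime(); // the unified form you persist

            Console.WriteLine("Stored (UTC):      {0:o}", utc);
            Console.WriteLine("Offset at capture: {0}", taken.Offset); // may be e.g. +05:30

            // Only convert back to local time at the last possible moment:
            Console.WriteLine("Displayed locally: {0:o}", utc.ToLocalTime());
        }
    }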

    Read the article

  • How to create a Uri instance parsed with GenericUriParserOptions.DontCompressPath

    - by Andrew Arnott
    When the .NET System.Uri class parses strings it performs some normalization on the input, such as lower-casing the scheme and hostname. It also trims trailing periods from each path segment. This latter feature is fatal to OpenID applications because some OpenIDs (like those issued from Yahoo) include base64-encoded path segments which may end with a period. How can I disable this period-trimming behavior of the Uri class?

    Registering my own scheme using UriParser.Register with a parser initialized with GenericUriParserOptions.DontCompressPath avoids the period trimming, and some other operations that are also undesirable for OpenID. But I cannot register a new parser for existing schemes like HTTP and HTTPS, which I must do for OpenIDs. Another approach I tried was registering my own new scheme, and programming the custom parser to change the scheme back to the standard HTTP(s) schemes as part of parsing:

    public class MyUriParser : GenericUriParser
    {
        private string actualScheme;

        public MyUriParser(string actualScheme)
            : base(GenericUriParserOptions.DontCompressPath)
        {
            this.actualScheme = actualScheme.ToLowerInvariant();
        }

        protected override string GetComponents(Uri uri, UriComponents components, UriFormat format)
        {
            string result = base.GetComponents(uri, components, format);

            // Substitute our actual desired scheme in the string if it's in there.
            if ((components & UriComponents.Scheme) != 0)
            {
                string registeredScheme = base.GetComponents(uri, UriComponents.Scheme, format);
                result = this.actualScheme + result.Substring(registeredScheme.Length);
            }

            return result;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            UriParser.Register(new MyUriParser("http"), "httpx", 80);
            UriParser.Register(new MyUriParser("https"), "httpsx", 443);
            Uri z = new Uri("httpsx://me.yahoo.com/b./c.#adf");
            var req = (HttpWebRequest)WebRequest.Create(z);
            req.GetResponse();
        }
    }

    This actually almost works. The Uri instance reports https instead of httpsx everywhere -- except the Uri.Scheme property itself. That's a problem when you pass this Uri instance to the HttpWebRequest to send a request to this address. Apparently it checks the Scheme property and doesn't recognize it as 'https', because it just sends plaintext to the 443 port instead of SSL.

    I'm happy for any solution that:
    - Preserves trailing periods in path segments in Uri.Path
    - Includes these periods in outgoing HTTP requests
    - Ideally works under ASP.NET medium trust (but not absolutely necessary)

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP).

    Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities, which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), exploits specifically the semicolon-delimited database connection strings that are constructed dynamically based on the user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact).

    In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

    In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings from Oracle Data Provider for .Net / ODP.Net (Manufacturer: Oracle; Type: .NET Framework Class Library):

    - Using TNS:
      Data Source = orcl; User ID = myUsername; Password = myPassword;
    - Using integrated security:
      Data Source = orcl; Integrated Security = SSPI;
    - Using the Easy Connect Naming Method:
      Data Source = username/password@//myserver:1521/my.server.com
    - Specifying pooling parameters:
      Data Source=myOracleDB; User Id=myUsername; Password=myPassword; Min Pool Size=10; Connection Lifetime=120; Connection Timeout=60; Incr Pool Size=5; Decr Pool Size=2;

    There are many variations of the connection strings, but the majority of connection strings are key value pairs delimited by semicolons. Attacks on connection strings are not new (see for example the SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder class tools to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities, because they are not aware of the danger posed by this kind of attack.
    So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique which typically involves appending repeating parameters to the request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni, delivered at the 2009 Appsec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name.

    When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password; a subsequent connection string is generated to connect to the back end database:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = XXX;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

    Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication.

    CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, systems accounts, etc. Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the Injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data, not as characters that are meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.
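    A sketch of the mitigation just described, assuming ODP.Net's Oracle.DataAccess.Client namespace; the data source and credentials are made-up sample values. Values assigned through the builder are treated as single values, so a password of "xxx; Integrated Security = true" stays one literal password instead of injecting a second, overriding KEYWORD=VALUE pair:

    using System;
    using Oracle.DataAccess.Client;

    class ConnectionStringExample
    {
        static void Main()
        {
            // pretend these came straight from a web form
            string userName = "myUsername";
            string password = "xxx; Integrated Security = true"; // attack attempt

            var builder = new OracleConnectionStringBuilder();
            builder["Data Source"] = "myDataSource";
            builder["User Id"] = userName;
            builder["Password"] = password; // quoted/escaped by the builder

            // the password is emitted as one quoted value, so the
            // "last one wins" rule has nothing extra to pick up
            Console.WriteLine(builder.ConnectionString);
        }
    }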
    For More Information:
    - The OracleConnectionStringBuilder is documented at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
    - Oracle has developed a publicly available course on preventing SQL injections. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
    - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered, then for a collision of (Laser, Player) both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naive method):

    public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>()
        where T1 : Entity
        where T2 : Entity
    {
        Type t1 = typeof(T1);
        Type t2 = typeof(T2);
        Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>();
        _onCollisionInternal += delegate(Entity obj1, Entity obj2)
        {
            if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType()))
                obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2));
            else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType()))
                obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1));
        };
        return obs;
    }

    However, this method is quite slow (measurable; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a couple cycles/allocations off this. I thought about (as in, spent an hour implementing then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type, with a boolean indicating whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert
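    One direction worth sketching (an added suggestion, not from the question): keep the handlers' declared types alongside their invokers and resolve the IsAssignableFrom checks once per concrete runtime type pair, caching the result. Derived types still work, because the cache key is the concrete pair and resolution tests it against the registered base types. Entity and the handler shapes below are simplified stand-ins for the question's Rx-based types:

    using System;
    using System.Collections.Generic;

    public class Entity { }   // stand-in for the game's Entity hierarchy

    public class CollisionDispatcher
    {
        private sealed class Handler
        {
            public Type T1;
            public Type T2;
            public Action<Entity, Entity> Invoke;   // expects (T1, T2) order
        }

        private readonly List<Handler> _handlers = new List<Handler>();

        // per concrete (Type, Type) pair: pre-resolved invokers with the
        // argument order already worked out
        private readonly Dictionary<Tuple<Type, Type>, List<Action<Entity, Entity>>> _cache =
            new Dictionary<Tuple<Type, Type>, List<Action<Entity, Entity>>>();

        public void Register<T1, T2>(Action<T1, T2> handler)
            where T1 : Entity where T2 : Entity
        {
            _handlers.Add(new Handler
            {
                T1 = typeof(T1),
                T2 = typeof(T2),
                Invoke = (a, b) => handler((T1)a, (T2)b)
            });
            _cache.Clear();   // new registrations invalidate cached resolutions
        }

        public void Dispatch(Entity a, Entity b)
        {
            var key = Tuple.Create(a.GetType(), b.GetType());
            List<Action<Entity, Entity>> resolved;
            if (!_cache.TryGetValue(key, out resolved))
            {
                resolved = new List<Action<Entity, Entity>>();
                foreach (var h in _handlers)
                {
                    if (h.T1.IsAssignableFrom(key.Item1) && h.T2.IsAssignableFrom(key.Item2))
                    {
                        resolved.Add(h.Invoke);
                    }
                    else if (h.T1.IsAssignableFrom(key.Item2) && h.T2.IsAssignableFrom(key.Item1))
                    {
                        var invoke = h.Invoke;
                        resolved.Add((x, y) => invoke(y, x));   // swap argument order
                    }
                }
                _cache[key] = resolved;   // IsAssignableFrom cost paid once per pair
            }
            foreach (var action in resolved)
            {
                action(a, b);
            }
        }
    }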

    Read the article

  • mciSendString cannot save to directory path

    - by robUK
    Hello, VS C# 2008 SP1. I have created a small application that records and plays audio. However, my application needs to save the wave file to the application data directory on the user's computer. mciSendString takes a C-style string as a parameter, and it has to be in 8.3 format. My problem is I can't get it to save. What is strange is that sometimes it works and sometimes it doesn't; most of the time it fails. However, if I save directly to the C drive it works every time. I have used 3 different methods, which I have coded below. The error number that I get when it fails is 286: "The file was not saved. Make sure your system has sufficient disk space or has an intact network connection." Many thanks for any suggestions.

    [DllImport("winmm.dll", CharSet=CharSet.Auto)]
    private static extern uint mciSendString([MarshalAs(UnmanagedType.LPTStr)] string command, StringBuilder returnValue, int returnLength, IntPtr winHandle);

    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    private static extern int mciGetErrorString(uint errorCode, StringBuilder errorText, int errorTextSize);

    [DllImport("Kernel32.dll", CharSet=CharSet.Auto)]
    private static extern int GetShortPathName([MarshalAs(UnmanagedType.LPTStr)] string longPath, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder shortPath, int length);

    // Stop recording
    private void StopRecording()
    {
        // Save recorded voice
        string shortPath = this.shortPathName();
        string formatShortPath = string.Format("save recsound \"{0}\"", shortPath);
        uint result = 0;
        StringBuilder errorTest = new StringBuilder(256);

        // C:\DOCUME~1\Steve\APPLIC~1\Test.wav - fails
        result = mciSendString(string.Format("{0}", formatShortPath), null, 0, IntPtr.Zero);
        mciGetErrorString(result, errorTest, errorTest.Length);

        // command line convention - fails
        result = mciSendString("save recsound \"C:\\DOCUME~1\\Steve\\APPLIC~1\\Test.wav\"", null, 0, IntPtr.Zero);
        mciGetErrorString(result, errorTest, errorTest.Length);

        // 8.3 short format - fails
        result = mciSendString(@"save recsound C:\DOCUME~1\Steve\APPLIC~1\Test.wav", null, 0, IntPtr.Zero);
        mciGetErrorString(result, errorTest, errorTest.Length);

        // Save to C drive works every time.
        result = mciSendString(@"save recsound C:\Test.wav", null, 0, IntPtr.Zero);
        mciGetErrorString(result, errorTest, errorTest.Length);

        mciSendString("close recsound ", null, 0, IntPtr.Zero);
    }

    // Get the short path name so that the mciSendString can save the recorded wave file
    private string shortPathName()
    {
        string shortPath = string.Empty;
        long length = 0;
        StringBuilder buffer = new StringBuilder(256);

        // Get the length of the path
        length = GetShortPathName(this.saveRecordingPath, buffer, 256);
        shortPath = buffer.ToString();

        return shortPath;
    }
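    One thing worth ruling out before blaming MCI (a guess, not a verified fix): GetShortPathName only succeeds for paths that already exist, so if Test.wav has not been created yet, shortPathName() can come back empty and the "save" command points nowhere, which would also explain why it occasionally works (when a previous run left the file behind) and why saving to C:\Test.wav directly, with no short-path conversion, always works. A sketch that converts only the directory (which does exist), assuming the question's P/Invoke declarations are in scope and adding using System.IO; the "MyRecorder" subfolder name is made up:

    string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    string saveDir = Path.Combine(appData, "MyRecorder");
    Directory.CreateDirectory(saveDir);                 // no-op if it already exists

    StringBuilder buffer = new StringBuilder(256);
    GetShortPathName(saveDir, buffer, 256);             // directory exists, so this can succeed
    string target = Path.Combine(buffer.ToString(), "Test.wav");

    uint result = mciSendString(string.Format("save recsound \"{0}\"", target), null, 0, IntPtr.Zero);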

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience

    Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle's enterprise software great-looking and usable doesn't stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that's why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs.

    Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There's a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle's investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you're an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provide a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here's a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.

    For Partners: George Papazzian, Principal, Naviscent, with Joyce Ohgi, Oracle

    Oracle Fusion Applications HCM Pre-Sales Seminar: In concert with Worldwide Alliances and Channels, under Applications Partner Enablement Director Jonathan Vinoskey's guidance, the Applications User Experience team delivers a two-day workshop. Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and day two focuses on positioning and leveraging Oracle's investment in the Oracle Fusion Applications user experience. The next workshops will occur on the following dates:
    - December 4-5, 2013 @ Manchester, UK
    - January 29-30, 2014 @ Reston, Virginia
    - February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman)
    - March 11-12, 2014 @ Dubai, United Arab Emirates
    - April 1-2, 2014 @ Chicago, Illinois

    Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers, and requires legal documentation. We will be talking about the Oracle applications UX strategy and roadmap.

    Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps: In this two-day, hands-on workshop built around Oracle's Application Development Framework, learn how to build desktop and mobile user interfaces based on Oracle's experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or who have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs.
    - Nov 5-6, 2013 @ Redwood Shores, CA, USA
    - January 28-29, 2014 @ Reston, Virginia, USA
    - February 25-26, 2014 @ Guadalajara, Mexico
    - March 9-10, 2014 @ Dubai, United Arab Emirates
    To register, contact [email protected]

    Simplified UI Customization & Extensibility (pilot workshop): We will be reviewing the proposed content for communicating the user experience toolkit available with the next release of Oracle Fusion Applications. Our core focus will be on what toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications or building custom applications that will need to leverage the simplified UI.
    - Dec 11, 2013 @ Reading, UK
    For information: contact [email protected]

    Private lab tour and demos: Interested in seeing what's going on in the Apps UX Labs? If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters.

    OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Customers: Angela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle

    Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you'll see.

    Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences.

    Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work.

    Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions; Richard Bingham, Oracle; Balaji Kamepalli, EiSTechnologies; Praveen Pillalamarri, EiSTechnologies

    How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. See above for dates and times.

    UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies.

    UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them.

    For Oracle Sales: Mike Klein, Jeremy Ashley, Brent White, Oracle

    Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights. Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

< Previous Page | 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808  | Next Page >