Search Results

Search found 11400 results on 456 pages for 'automated testing'.


  • Fan not working on thinkpad L430, laptop overheating

    - by Dirk B.
    I'm having problems controlling the fan of my Lenovo Thinkpad L430. The fan doesn't start. Without any fan control installed the fan just doesn't run. If I run stress, it does run a little, but it's nowhere near the speed it should be. After a while, the laptop just overheats and stops.

    I tried to install tp-fancontrol and enabled thinkpad_acpi fancontrol=1, but to no avail. If I try to set the fan speed manually, it doesn't start up. In Windows, there's a program called TPFanControl. It turns out that this laptop uses a different scheme to control the fan than other Thinkpads: the level runs from 0 to 255, where max = 0 and min = 255. Now I'm looking for a fan control program that works for Linux. Does anyone know if one actually exists? Anyone with any experience with fan control on an L430?

    Update: sudo pwmconfig gives the following output:

    # pwmconfig revision 5857 (2010-08-22)
    This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm.
    We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been set to full speed after the program has completed.
    Found the following devices:
      hwmon0 is acpitz
      hwmon1/device is coretemp
      hwmon2/device is thinkpad
    Found the following PWM controls:
      hwmon2/device/pwm1
    hwmon2/device/pwm1 is currently setup for automatic speed control. In general, automatic mode is preferred over manual mode, as it is more efficient and it reacts faster. Are you sure that you want to setup this output for manual control? (n) y
    Giving the fans some time to reach full speed...
    Found the following fan sensors:
      hwmon2/device/fan1_input     current speed: 0 ... skipping!
    There are no working fan sensors, all readings are 0. Make sure you have a 3-wire fan connected. You may also need to increase the fan divisors. See doc/fan-divisors for more information.

    regards, Dirk

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters
    Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research
    Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.
    Don’t settle on just one tool; identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool:
    - Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn’t want to lose the data.
    - Run the tool via the command line.  If you cannot automate the tool in Continuous Integration, what is the point? (See the sketch below.)
    - Create a copy of a database on demand.
    - Backup/restore databases locally.
    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations.

    Reconciling Production
    If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice.
    Add whatever schema change tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release.
    Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.
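    To make the "run it from the command line" point concrete, here is a minimal C# sketch of the kind of wrapper a Continuous Integration step might use to apply migrations. The executable name, arguments and exit-code convention are hypothetical stand-ins, not the actual TSqlMigrations or RoundHouse interface; substitute whatever your chosen tool documents.

    using System;
    using System.Diagnostics;

    class MigrationRunner
    {
        // Runs a (hypothetical) command-line migration tool and fails the build on a non-zero exit code.
        static int Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "migrate.exe",                                           // placeholder for your migration tool
                Arguments = "--server . --database AppDb --scripts .\\migrations",  // placeholder arguments
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(psi))
            {
                Console.WriteLine(process.StandardOutput.ReadToEnd()); // surface the tool's log in the CI output
                process.WaitForExit();
                return process.ExitCode; // CI servers treat a non-zero exit code as a failed build
            }
        }
    }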
    Say No to Shared Dev DBs
    Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.

    First prod release
    Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.

    Evaluation
    After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!

    Enjoy
    -Wes

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components.

    Three-Tier Client/Server Model Components
    - Client Component
    - Server Component
    - Database Component

    The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device that is connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and an example of this can be seen through the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

    The Server Components of the network return data back to the requesting client based on specific client requests.  Server Components also inherit the attributes of a Client Component in that they are a device on the network and that they can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, then interprets the request, processes the web pages, and returns the processed data back to the web browser client so that it may render the data for the user to interpret.

    The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed.

    The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another.

    Three-Tier Application Architecture Model
    - Presentation Layer/Logic
    - Business Layer/Logic
    - Data Layer/Logic

    The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user.  Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows for the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute the requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors.
    Common Load Balancing Factors
    - Current Server Availability
    - Current Server Response Time
    - Current Server Priority

    The Business Layer and its corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer (a sketch of this check appears at the end of this excerpt). The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before it is returned to the end user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected.

    The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically in most modern applications data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take on one of several forms.

    Common Database Formats
    - XML File
    - Pipe Delimited File
    - Tab Delimited File
    - Comma Delimited File (CSV)
    - Plain Text File
    - Microsoft Access
    - Microsoft SQL Server
    - MySql
    - Oracle
    - Sybase

    The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically allows for a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to worry about storing the data in memory. This prevents overhead that could be created when an application must retain all data in memory.

    As you can see, the Three-Tier Client/Server Networking Model and the Three-Tier Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network.

    Network Components and Application Layers Interaction
    - Database Components store all data needed for the Data Access Layer to manipulate and return to the Business Layer
    - The Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer
    - The Client Component hosts the Presentation Layer that interprets the data and presents it to the user
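    The unique-email example above is easy to picture in code. The sketch below is illustrative only; the class and repository names are invented for this illustration and are not taken from the article.

    using System;

    // Data Layer abstraction the Business Layer talks to.
    public interface IAccountRepository
    {
        bool EmailExists(string email);
        void Save(string email, string displayName);
    }

    // Business Layer: applies rules before anything reaches the Data Layer.
    public class AccountService
    {
        private readonly IAccountRepository repository;

        public AccountService(IAccountRepository repository)
        {
            this.repository = repository;
        }

        public void Register(string email, string displayName)
        {
            if (repository.EmailExists(email))
                throw new InvalidOperationException("An account already exists for this email address.");

            repository.Save(email, displayName); // only validated data reaches the Data Layer
        }
    }

    // The Presentation Layer would call AccountService.Register(...) and translate the exception into a message for the user.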

    Read the article

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments.

    There are a couple of extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens are that a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks of no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments is bad because the code is bad, but there is another reason for not commenting that is gaining more popularity.

    I've heard and read that code should be self-documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases.  i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment.

    These were the two no-comment extremes, so let's look at the too-many-comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive.  i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing? I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach.

    I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e.
    If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario.  My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer.  I just don't agree with the approach of going out of the way to avoid making a comment.  I'm also concerned that some developers would take this approach as an excuse to not comment their bad code.

    Another area where I like comments is documentation comments.  Java has them, and so do C# and VB.  They're convenient because we can build automated tools that extract these comments.  These extracted comments are often much better than no documentation at all.  The "go read the code" answer doesn't always fulfill the need for a quick summary of an API.

    To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code.

    Joe
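    As a small C# illustration of the balance described above (the class and header are invented for the example), XML documentation comments describe the API for tooling, while a single inline comment records an intention the code alone can't convey:

    using System.Collections.Generic;

    public class BillingClient
    {
        private readonly IDictionary<string, string> headers = new Dictionary<string, string>();

        /// <summary>
        /// Queues an invoice for submission to the billing endpoint.
        /// </summary>
        /// <param name="invoiceId">Identifier of the invoice to submit.</param>
        /// <returns>True if the invoice was accepted for submission.</returns>
        public bool SubmitInvoice(int invoiceId)
        {
            // The (hypothetical) gateway rejects requests without this header even though
            // its documentation lists it as optional - the kind of nuance worth a comment.
            headers["X-Billing-Channel"] = "web";

            return invoiceId > 0; // stand-in for the real send logic
        }
    }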

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: With this post you create a scenario that is not supported by Microsoft. It will break your support for this server with Microsoft, so handle with care.

    I am the administrator of a TFS environment with a lot of Project Collections. In the supported configuration of TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed. Jim Lamb created a post on how you can modify your system to change this behaviour. But since I have so many Project Collections, I automated this with the API of TFS.

    When you install a new build server via the UI, you do the following steps:
    - Register the build service (with this you hook the Windows server into the build server environment)
    - Add a new build controller
    - Add a new build agent

    So in pseudo code, the code would look like:

    foreach (projectCollection in GetAllProjectCollections)
    {
        CreateNewWindowsService();
        RegisterService();
        AddNewController();
        AddNewAgent();
    }

    The following code fragments show you the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService
    We create a new Windows service with the SC command via the Diagnostics.Process class:

    var pi = new ProcessStartInfo("sc.exe")
    {
        Arguments = string.Format(
            "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
            serviceHostName, tpcName)
    };
    Process.Start(pi);
    pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
    Process.Start(pi);

    RegisterService
    The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. To get information on these you need nice Microsoft friends and the .NET Reflector.

    // Indicate which build service host instance we are using
    typeof(BuildServiceHostUtilities).Assembly.GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess").InvokeMember("NamedInstance",
        System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static, null, null, new object[] { serviceName });

    // Create the build service host
    serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
    serviceHost.Save();

    // Register the build service host
    BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent
    Once you have the BuildServerHost, the rest is pretty straightforward. There are methods on the BuildServerHost to modify the controllers and the agents:

    controller = serviceHost.CreateBuildController(controllerName);
    agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
    controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want to have sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs.
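    The post leaves GetAllProjectCollections as pseudo code. One way to enumerate collections is through the TFS 2010 configuration server API; the sketch below assumes the ITeamProjectCollectionService from the client object model and is meant as a starting point rather than a drop-in piece of the attached project.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.Framework.Client;

    class ProjectCollectionLister
    {
        static void Main()
        {
            // Connect to the TFS configuration server (the application tier, not a single collection).
            var configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(
                new Uri("http://tfsserver:8080/tfs"));

            // Ask the framework service for every team project collection it knows about.
            var collectionService = configurationServer.GetService<ITeamProjectCollectionService>();

            foreach (var collection in collectionService.GetCollections())
            {
                // Each collection name would feed the CreateNewWindowsService / RegisterService /
                // AddNewController / AddNewAgent steps described above.
                Console.WriteLine(collection.Name);
            }
        }
    }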

    Read the article

  • Coherence Based WebLogic Server Session Management

    - by [email protected]
    Specifications

    Supported Configurations
    - WebLogic Server 10.3.2 (or 10.3.1)
    - Coherence 3.5.2/463
    If you use a version other than the above, please check the following matrix:
    - WebLogic Server 9.2 MP1: WebLogic Smart Update Patch ID AJQB; Minimum Coherence Release Level/MetaLink Patch ID: 3.4.2 Patch 2 - Patch ID 8429415
    - WebLogic Server 10.3: WebLogic Smart Update Patch ID 6W2W; Minimum Coherence Release Level/MetaLink Patch ID: 3.4.2 Patch 6 - Patch ID 11399293

    Environment Variables
    - %COHERENCE_HOME%: Coherence installation directory
    - %DOMAIN_HOME%: WebLogic domain folder

    Instructions
    We will create two WebLogic domains, domain_a and domain_b, and configure those domains with Coherence-based session management. Changes to session variable values in one domain will then propagate to the other domain.

    Main Steps

    WebLogic Server
    1. create domain_a (the domain creation process is omitted here)
    2. copy %COHERENCE_HOME%\lib\coherence.jar to %DOMAIN_HOME%\lib
    3. start up the domain
    4. deploy %COHERENCE_HOME%\lib\coherence-web-spi.war as a Shared Library
    5. repeat steps 1~4 at domain_b

    Coherence
    1. duplicate %COHERENCE_HOME%\bin\cache-server.cmd in the same folder and rename it to web-cache-server.cmd
    2. modify web-cache-server.cmd:
       java -server -Xms512m -Xmx512m -cp %coherence_home%/lib/coherence.jar;%coherence_home%/lib/coherence-web-spi.war -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml -Dtangosol.coherence.session.localstorage=true com.tangosol.net.DefaultCacheServer
    3. start up web-cache-server.cmd

    Testing
    1. develop a web app with OEPE or JDeveloper and implement functions for changing, viewing, and listing session variables (or download the sample code here)
    2. modify weblogic.xml with the following content:
       <?xml version="1.0" encoding="UTF-8"?>
       <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.0/weblogic-web-app.xsd">
         <wls:weblogic-version>10.3.2</wls:weblogic-version>
         <wls:context-root>CoherenceWeb</wls:context-root>
         <wls:library-ref>
           <wls:library-name>coherence-web-spi</wls:library-name>
           <wls:specification-version>1.0.0.0</wls:specification-version>
           <wls:exact-match>true</wls:exact-match>
         </wls:library-ref>
       </wls:weblogic-web-app>
    3. deploy the web app to domain_a and domain_b
    4. change a session variable value at domain_a and check whether it changed at domain_b

    References
    - Using Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3
    - Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3

    Read the article

  • Packing a DBF

    - by Tom Hines
    I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized / remembered the DBF driver does not ACTUALLY delete records from the data source -- it only marks them for deletion.  You are responsible for handling the "chaff" either by using a utility to remove deleted records or by simply ignoring them.  If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content.  After several rounds of testing CRUD, the output DBF was huge. So, I went hunting for a method to "Pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FOXPRO driver at ( http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx ).  Once installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests.  Using MSQuery, I simply tested the raw SQL command Pack {tablename} and it WORKED! One really neat thing is the PACK command is used like regular SQL instructions; "Pack {tablename}" is all that is needed. It is necessary, however, to close all connections to the database (and re-open) before issuing the PACK command or you will get the "File is in use" error.    Here is some C# code for a Pack method.         /// <summary>       /// Pack the DBF removing all deleted records       /// </summary>       /// <param name="strTableName">The table to pack</param>       /// <param name="strError">output of any errors</param>       /// <returns>bool (true if no errors)</returns>       public static bool Pack(string strTableName, ref string strError)       {          bool blnRetVal = true;          try          {             OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()             {                Dsn = "PSAP_FOX_DBF"             };             string strSQL = "pack " + strTableName;             using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))             {                connOdbc.Open();                OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);                cmdOdbc.ExecuteNonQuery();                connOdbc.Close();             }          }          catch (Exception exc)          {             blnRetVal = false;             strError = exc.Message;          }          return blnRetVal;       }
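    For completeness, a call site for the Pack method above might look like the sketch below. The table name is a placeholder, and the snippet assumes the Pack method shown above is in scope; the important detail, as noted in the post, is that any other connections to the DBF are closed before PACK runs.

    // Close any open OdbcConnection to the DBF before this point, or PACK fails with "File is in use".
    string error = string.Empty;
    if (Pack("customers", ref error))      // "customers" is a placeholder table name
    {
        Console.WriteLine("Packed successfully.");
    }
    else
    {
        Console.WriteLine("Pack failed: " + error);
    }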

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait.

    One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair. Great! So how could we add some of this flair stuff to NerdDinner?

    What do we want to show?
    If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area. Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them.

    The Geolocation Part
    Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it:
    - You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well.
    - You use the HTML5 Geolocation API or Google Gears or some other browser based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever.
    - There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing.

    I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple.
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("CountryName"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
That means we can’t use Html.ActionLink, since it generates relative routes. We need to escape everything since it’s being written out as strings. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType%> directive. <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>'); document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">'); <% if (Model.Dinners.Count == 0) { %> document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>'); <% } else { %> document.write('<h3> Dinners Near You</h3><ul>'); <% foreach (var item in Model.Dinners) { %> document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>'); <% } %> document.write('</ul>'); <% } %> document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>'); Getting IP’s for Testing There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better. Next steps I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for Geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). 
Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit. So the first step there is to save the geolocalization information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
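    As a rough sketch of the cookie idea in the last paragraph (the class and cookie name below are invented for this illustration and are not from the NerdDinner source), caching the lookup per visitor keeps the flair from hitting ipinfodb.com on every page view:

    using System;
    using System.Web;

    public static class GeoCookieCache
    {
        private const string CookieName = "nd-geo";

        // Returns the cached city for this visitor, or performs the lookup once and caches it.
        public static string GetOrAddCity(HttpContextBase context, Func<string> lookupCity)
        {
            HttpCookie cookie = context.Request.Cookies[CookieName];
            if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
                return cookie.Value; // cached from a previous visit - no web service call

            string city = lookupCity(); // e.g. () => GeolocationService.HostIpToPlaceName(ip).City

            context.Response.Cookies.Add(new HttpCookie(CookieName, city)
            {
                Expires = DateTime.Now.AddDays(7) // time-limited, as the post suggests
            });

            return city;
        }
    }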

    Read the article

  • The Winds of Change are a Blowin&rsquo;

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • ASP.NET Web API - Screencast series Part 2: Getting Data

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

    In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. This second screencast starts to build out the Comments example - a JSON API that's accessed via jQuery. This sample uses a simple in-memory repository. At this early stage, the GET /api/values/ just returns an IEnumerable<Comment>. In part 4 we'll add on paging and filtering, and it gets more interesting.

    The get by id (e.g. GET /api/values/5) case is a little more interesting. The method just returns a Comment if the Comment ID is valid, but if it's not found we throw an HttpResponseException with the correct HTTP status code (HTTP 404 Not Found). This is an important thing to get - HTTP defines common response status codes, so there's no need to implement any custom messaging here - we tell the requestor that the resource they requested wasn't there.

    public Comment GetComment(int id)
    {
        Comment comment;
        if (!repository.TryGet(id, out comment))
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return comment;
    }

    This is great because it's standard, and any client should know how to handle it. There's no need to invent custom messaging here, and we can talk to any client that understands HTTP - not just jQuery, and not just browsers. But it's crazy easy to consume an HTTP API that returns JSON via jQuery. The example uses Knockout to bind the JSON values to HTML elements, but the thing to notice is that calling into /api/comments is really simple, and the return from the $.get() method is just JSON data, which is really easy to work with in JavaScript (since JSON stands for JavaScript Object Notation and is the native serialization format in Javascript).

    $(function() {
        $("#getComments").click(function () {
            // We're using a Knockout model. This clears out the existing comments.
            viewModel.comments([]);
            $.get('/api/comments', function (data) {
                // Update the Knockout model (and thus the UI) with the comments received back
                // from the Web API call.
                viewModel.comments(data);
            });
        });
    });

    That's it! Easy, huh? In Part 3, we'll start modifying data on the server using POST and DELETE.
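    Because the endpoint speaks plain HTTP and JSON, any HTTP client can consume it, not just jQuery. As a rough illustration (not part of the original series; the /api/comments route, port, and 404 behaviour are assumed from the sample above), a .NET caller using HttpClient might look like this:

    using System;
    using System.Net;
    using System.Net.Http;

    class CommentsClientSketch
    {
        static void Main()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") })
            {
                // GET /api/comments/5 - a missing comment comes back as HTTP 404, not a custom error payload.
                HttpResponseMessage response = client.GetAsync("api/comments/5").Result;

                if (response.StatusCode == HttpStatusCode.NotFound)
                {
                    Console.WriteLine("Comment 5 does not exist.");
                    return;
                }

                response.EnsureSuccessStatusCode();
                string json = response.Content.ReadAsStringAsync().Result;
                Console.WriteLine(json); // raw JSON; hand it to your favorite JSON deserializer
            }
        }
    }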

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field.  To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated:
    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    (Excerpt from EDQ’s CDS Individual Name Standardization Dictionary)

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

    Here is a brief guide to the main services provided in the pack:

    Address Verification and Standardization
    (EDQ’s CDS Address Cleaning Process)
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically: Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API Optimization of address verification by pre-standardizing data where required Formatting of output addresses into the input address fields normally used by applications Adding descriptions of the address verification and geocoding return codes The process can then be used to provide real-time and batch address cleansing in any application; such as a simple web page calling address cleaning and geocoding as part of a check on individual data.     Duplicate Prevention Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below: EDQ Duplicate Prevention Architecture Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit). Batch Matching In order to allow customers to use different match rules in batch to real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards.  In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes. 
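    From the application's point of view, the duplicate prevention flow above is a simple three-step call sequence. The sketch below uses invented placeholder interfaces purely to illustrate that sequence; the real EDQ web service contracts (and the candidate-selection query against the application's own data) will look different and should be taken from the CDS documentation.

    using System.Collections.Generic;
    using System.Linq;

    // Placeholder contracts standing in for the EDQ Cluster Key Generation and Matching services.
    public interface IClusterKeyService { IList<string> GenerateKeys(CustomerRecord record); }
    public interface IMatchingService { IList<MatchResult> Match(CustomerRecord driving, IList<CustomerRecord> candidates); }
    public interface ICustomerStore { IList<CustomerRecord> FindByClusterKeys(IList<string> keys); }

    public class CustomerRecord { public string Name; public string Address; }
    public class MatchResult { public CustomerRecord Candidate; public int Score; }

    public class DuplicatePreventionCheck
    {
        private readonly IClusterKeyService keys;
        private readonly IMatchingService matcher;
        private readonly ICustomerStore store;

        public DuplicatePreventionCheck(IClusterKeyService keys, IMatchingService matcher, ICustomerStore store)
        {
            this.keys = keys;
            this.matcher = matcher;
            this.store = store;
        }

        // Returns scored matches for a new or updated record; the application decides how to act on them.
        public IList<MatchResult> FindPossibleDuplicates(CustomerRecord driving)
        {
            IList<string> clusterKeys = keys.GenerateKeys(driving);                   // 1. key generation service
            IList<CustomerRecord> candidates = store.FindByClusterKeys(clusterKeys);  // 2. candidate selection in the app's data
            return matcher.Match(driving, candidates)                                 // 3. matching service scores the candidates
                          .OrderByDescending(m => m.Score)
                          .ToList();
        }
    }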
The Future The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including: An out-of-the-box Data Quality Dashboard Even more comprehensive international data handling Address search (suggesting multiple results) Integrated address matching The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Internet Explorer 9 RC Now Available: Here’s the Most Interesting New Stuff

    - by The Geek
    Yesterday Microsoft announced the release candidate of Internet Explorer 9, which is very close to the final product. Here’s a screenshot tour of the most interesting new stuff, as well as answers to your questions. The most important question is should you install this version? And the answer is absolutely yes. Even if you don’t use IE, it’s better to have a newer, more secure version on your PC.
    What’s New Under the Hood in Release Candidate vs Beta?
    If you want to see the full list of changes with all the original marketing detail, you can read Microsoft’s Beauty of the Web page, but here are the highlights you might be interested in.
    Improved Performance – they’ve made a lot of changes, and it really feels faster, especially when using more intensive web apps like Gmail.
    Power Consumption Settings – since the JavaScript engine in any browser uses a lot of CPU power, they’ve now integrated it into the power settings, so if you’re on battery it will use less CPU, and save battery life. This is really a great change.
    UI Changes – The tab bar can now be moved below the address bar (see below for more), they’ve shaved some pixels off the design to save space, and now you can toggle the Menu bar to be always on.
    Pinned Sites – now you can pin multiple pages to a single taskbar button. Very useful if you always use a couple web apps together. You can also pin a site in InPrivate mode.
    FlashBlock and AdBlock are Integrated (sorta) – there’s a new ActiveX filtering that lets you enable plug-ins only for sites you trust. There’s also a tracking protection list that can block certain content (which can obviously be used to block ads).
    Geolocation – while a lot of privacy conscious people might complain about this, if you use your laptop while traveling, it’s really useful to have geo-located features when using Google Maps, etc. Don’t worry, it won’t leak your privacy by default.
    WebM Video – Yeah, Google recently removed H.264 from Chrome, but Microsoft has added Google’s WebM video format to Internet Explorer.
    Keep reading for more about using the new features.

    Read the article

  • Lenovo Thinkpad L430 overheating due to fan problems

    - by Dirk B.
    This is the same question as Fan not working on thinkpad L430, laptop overheating, but that question has been marked as a duplicate, which it is not, and I cannot reopen it. I'm having problems controlling the fan of my Lenovo Thinkpad L430. The fan doesn't start. Without any fan control installed the fan just doesn't run. If I run stress, it does run a little, but it's nowhere near the speed it should be. After a while, the laptop just overheats and stops. I Tried to install tp-fancontrol, and enabled thinkpad_acpi fancontrol=1, but to no avail. If I try to set the fan speed manually, it doesn't start up. In windows, there's a program called TPFanControl. It turns out that this laptop uses a different scheme to control the fan than other thinkpads. The level runs from 0 to 255, and max = 0 and min=255. Now I'm looking for a fan control program that works for linux. Does anyone know if it actually exists? Anyone with any experience on fan control on a L430? Update: sudo pwmconfig gives the following output: # pwmconfig revision 5857 (2010-08-22) This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm. We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed. Found the following devices: hwmon0 is acpitz hwmon1/device is coretemp hwmon2/device is thinkpad Found the following PWM controls: hwmon2/device/pwm1 hwmon2/device/pwm1 is currently setup for automatic speed control. In general, automatic mode is preferred over manual mode, as it is more efficient and it reacts faster. Are you sure that you want to setup this output for manual control? (n) y Giving the fans some time to reach full speed... Found the following fan sensors: hwmon2/device/fan1_input current speed: 0 ... skipping! There are no working fan sensors, all readings are 0. Make sure you have a 3-wire fan connected. You may also need to increase the fan divisors. See doc/fan-divisors for more information. update: If you need it, lspci is available here

    Read the article

  • SQLAuthority News – Why VoIP Service Providers Should Think About NuoDB’s Geo Distribution

    - by Pinal Dave
    You can always tell when someone’s showing off their cool, cutting edge comms technology. They tend to raise their voice a lot. Back in the day they’d announce their gadget leadership to the rest of the herd by shouting into their cellphone. Usually the message was no more urgent than “Hi, I’m on my cellphone!” Now the same types will loudly name-drop a different technology to the rest of the airport lounge. “I’m leveraging the wifi,” a fellow passenger bellowed, the other day, as we filtered through the departure gate. Nobody needed to know that, but the subtext was “look at me everybody”. You can tell the really advanced mobile user – they tend to whisper. Their handset has a microphone (how cool is that!) and they know how to use it.
    Sometimes these shouty public broadcasters aren’t even connected anyway because the database for their Voice over IP (VoIP) platform can’t cope. This will happen if they are using a traditional SQL model to try and cope with a phone network which has far flung offices and hundreds of mobile employees. That, like shouting into your phone, is just wrong on so many levels. What VoIP needs now is a single, logical database across multiple servers in different geographies. It needs to be updated in real-time and automatically scaled out during times of peak demand. A VoIP system should scale up to handle increased traffic, but just as importantly it must then go back down in the off peak hours.
    Try this with a MySQL database. It can’t scale easily enough, so it will keep your developers busy. They’ll have spent many hours trying to knit the different databases together. Traditional relational databases can possibly achieve this, at a price. Mind you, you could extend baked bean cans and string to every point on the network and that would be no less elegant. That’s not really following engineering principles though, is it? Having said that, most telcos and VoIP systems use a separate, independent solution for each office location, which they link together – loosely. The more office locations, the more complex and expensive the solution becomes and so the more you spend on maintenance. Ideally, you’d have a fluid system that can automatically shift its shape as the need arises. That’s the point of software, isn’t it – it adapts. Otherwise, we might as well return to the old days. A MySQL system isn’t exactly baked bean cans attached by string, but it’s closer in spirit to the old many-toothed mechanical beast that was employed in the first type of automated switchboard.
    NuoDB’s NewSQL is designed to be a single database that works across multiple servers, which can scale easily, and scale on demand. That’s one system that gives high connectivity but no latency, complexity or maintenance issues. MySQL works in some circumstances, but a period of growth isn’t one of them. So as a company moves forward, the MySQL database can’t keep pace. Data storage and data replication errors creep in. Soon the diaspora of offices becomes a problem. Your telephone system isn’t just distributed, it is literally all over the place.
    Though voice calls are often a software function, some of the old habits of telephony remain. When you call an engineer out, some of them will listen to what you’re asking for and announce that it cannot be done. This is what happens if you ask, say, database engineers familiar with Oracle or Microsoft to fulfill your wish for a low maintenance system built on a single, fluid, scalable database. No can do, they’d say.
In fact, I heard one shouting something similar into his VoIP handset at the airport. “I can’t get on the network, Mac. I’m on MySQL.” You can download NuoDB from here. “NuoDB provides the ability to replicate data globally in real-time, which is not available with any other product offering,” states Weeks.  “That alone is remarkable and it works. I’ve seen it. I’ve used it.  I’ve tested it. The ability to deploy NuoDB removes a tremendous burden from our support and engineering teams.” Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

    Read the article

  • My Ubuntu Touch seems to be broken no matter how many different files I try

    - by zeokila
    So I'm planning on testing out Ubuntu Touch, and developing some applications for it, so I thought I would flash it to my Nexus 4 that was already unlocked, and running Paranoid Android and the associated kernel. I headed to Ubuntu's website, browsed around and came across this page: Touch/Install - Ubuntu Wiki. I followed Step 1 perfectly, word for word. It seems to me that everything on that part is fine. I skipped Step 2, having already done that for Paranoid Android, and then I followed Steps 3 and 4 word for word also. Using the command phablet-flash -b everything seemed to be fine. So it booted up and all seemed normal, but it wasn't. Here are some major bugs that only seem to happen to me.
    So I was greeted with a normal lock screen: But one of the first noticeable things on the home screen is that I only have 4 tabs, not 5: Some of the applications that are supposed to work do not (I know some are dummies), like here with the calculator, this is what I get: On the homescreen it is a black box, it will crash a couple of seconds later: Another annoying problem is that when I want to close apps, I have to go from right to left; if I close an app on the left first it will close it, but it will open the app to its right, weird: Yet another bug, this one is in the pull down drawer, when you just click on it, you can see that not all the icons are there: Pretty much everything else works as it should, but my main problem is that there is no telephony. I'm not sure how it works exactly, but I'm never asked a SIM code (I'm guessing you need that?), I can't compose SMS's and can't dial numbers, it won't let me select the 'Send' or 'Call' button.
    I've since tried manual installs all over the place with these files: http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-armel+mako.zip (Says 44MB on site - 46.6MB on my laptop) http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-phablet-armhf.zip (Says 366MB on site - 383.2MB on my laptop) There are some weird size differences between what the site told me and what I downloaded, but I've tried re-downloading just to end up with the same file. And it just always ends up with the same problems. No telephony and those weird bugs. So my question is, how the hell can I get the same version as everyone else, with the ability to send texts and call and open the calculator and more? Also, definitely running saucy: And maybe useful? This is what's in the device info:

    Read the article

  • The .NET Rocks! Visual Studio 2010 Road Trip

    - by Laila
    Carl Franklin and Richard Campbell, the two .NET Rocks radio show hosts, have decided to set off to 15 cities in the US, between April 19th and May 7th, in their DotNetMobile (a 30 foot RV). What for you'll ask me? Well, to drive around the US, meet up with .NET developers, and show off the latest and greatest in Visual Studio 2010 and .NET 4.0! Each evening, they stop in a city and host a three hour event in front of a 100 to 300 crowd of developers, where Carl is showing off media features in Silverlight 4 and their road trip tracking application, whilst Richard is demo-ing the web performance testing features of VS2010 using his portable server rig. But before they take to the stage, they have a special guest brought in - a rock star from the Visual Studio world - whom they interview for an hour as a .NET Rock episode. So far, they've had - amongst others - Phil Haack, a Program Manager with the ASP.NET team working on ASP.NET MVC, Dan Fernandez, an Evangelism Manager in the Developer and Platform Evangelism team at Microsoft, and Beth Massi, Senior Program Manager on the Visual Studio Community Team at Microsoft. I love the fact that the audience gets a chance to participate, ask questions and have a great laugh, as you can hear in the first episode! Along the way, the .NET Rocks guys are giving away great prizes (including .NET Reflector Pro, ANTS Memory Profiler licenses, and "40" LCD TVs!). Even more out of the ordinary, at each stop on the road trip, one lucky attendee (who entered in the Ride Along competition) gets to jump in the RV with Carl and Richard and ride along with them to the next stop on the roadtrip. How cool is that! Richard told us: "Our first winner in Mountain View was Eric Ziko. I was looking for him to announce that he had won, when he found us and gave us a bottle of scotch he had brought just to say 'thanks for the great show'. We all had a toast from the bottle the next night when he headed back home." Cheeky! There's still space to a few of these events, so if you want to attend, register now, because it's first come first serve. We're grateful to Richard and Carl for giving us the opportunity to sponsor this major .NET event! A unique .NET adventure worth following for sure. Cheers, Laila

    Read the article

  • Administering Team Foundation Server 2010 Class resource links

    - by John Alexander
    Here are the resource links for the Administering Team Foundation Server 2010 Class from last week in Minneapolis.  Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Microsoft® Virtual PC 2007 SP1 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=5e13b15a-fd74-4cd7-b53e-bdf9456855bd Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Windows Virtual PC http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Windows Server 2008 Hyper-V http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc Customizable process guidance http://blogs.msdn.com/b/allclark/archive/2010/08/12/customizable-process-guidance.aspx The 5 most read Visual Studio ALM help topics on MSDN http://blogs.msdn.com/b/allclark/archive/2010/11/12/the-5-most-read-visual-studio-alm-help-topics-on-msdn.aspx Inside TFS http://visualstudiomagazine.com/Articles/List/Inside-TFS.aspx Testing Topics http://msdn.microsoft.com/en-us/library/dd286594.aspx Blogs http://community.accentient.com http://geekswithblogs.net Branching Guide http://tfsbranchingguideiii.codeplex.com/ Great VSTS blog http://geekswithblogs.net/hinshelm/Default.aspx My Blog :D http://geekswithblogs.net/jalexander/Default.aspx Visual Studio Forums http://bit.ly/fE16u3 TFS Migration and Integration Solutions http://bit.ly/cLaBnT TFS Migration and Integration Tools (VS ALM Rangers) http://bit.ly/9tHWdG TFS Migration and Integration Platform (CodePlex) http://tfsintegration.codeplex.com Team Foundation Server SDK http://code.msdn.microsoft.com/TfsSdk Migrate and Integration Forum http://bit.ly/f4Lnps Team Foundation Server Widgets http://www.tfswidgets.com TFS Sdk http://code.msdn.microsoft.com/TfsSdk TFS Migration and Integration Solutions http://bit.ly/cLaBnT TFS Integration Tools Forum http://bit.ly/f4Lnps TFS Integration Tools http://bit.ly/9tHWdG TFS Integration Platform http://tfsintegration.codeplex.com VS Upgrade Guide http://vs2010upgradeguide.codeplex.com Updating an Upgraded Team Project to Access New Features http://bit.ly/9cCcMP Team Foundation Power Tools http://bit.ly/dfNVQk Team Foundation Administration Tool http://tfsadmin.codeplex.com Using Team Foundation Server Command-Line Tools http://bit.ly/hCyozJ Changing Groups and Permissions with TFSSecurity http://bit.ly/esIjgw Unofficial Prep guide for TFS 2010 Administration Exam (70-512) http://geekswithblogs.net/enriquelima/archive/2010/07/21/unofficial-prep-guide-for-tfs-2010-administration-exam-70-512.aspx Another Prep Guide http://bit.ly/bpO30R Professional Application Lifecycle Management with VS 2010 Book http://bit.ly/9rCIRj Search CodePlex for TFS related apps http://www.codeplex.com/site/search Visual Studio Gallery http://visualstudiogallery.com TFS Widgets http://tfswidgets.com Migrate from Visual SourceSafe http://bit.ly/8XPSRh Team Foundation Server MSSCCI Provider 2010 http://bit.ly/dst1OQ Attrice TFS Sidekicks www.attrice.info/cm/tfs Hosted TFS http://bit.ly/cMZdvp Manually Processing the Team Foundation Server 2010 Data Warehouse and Analysis Services Database http://bit.ly/aG5oEh TFS 2005, 2008 and 2010 Compatibility http://shrinkster.com/1dhj

    Read the article

  • A Rose by Any Other Name..

    - by Geoff N. Hiten
    It is always a good start when you can steal a title line from one of the best writers in the English language.  Let’s hope I can make the rest of this post live up to the opening.  One recurring problem with SQL Server is moving databases to new servers.  Client applications use a variety of ways to resolve SQL Server names, some of which are not changed easily <cough SharePoint /cough>.  If you happen to be using default instances on both the source and target SQL Server, then the solution is pretty simple.  You create (or bug the network admin until she creates) two DNS “A” records. One points the old name to the new IP address.  The other creates a new alias for the old server, since the original system name is now redirected.  Note this will redirect ALL traffic from the old server to the new server, including RDP and file share connection attempts.
    Figure 1 – Microsoft DNS MMC Snap-In
    Figure 2 – DNS New Host Dialog Box
    Both records are necessary so you can still access the old server via an alternate name.
    Server Role    IP Address      Name     Alias
    Source         10.97.230.60    SQL01    SQL01_Old
    Target         10.97.230.80    SQL02    SQL01
    Table 1 – Alias List
    If you or somebody set up connections via IP address, you deserve to have to go to each app and fix it by hand.  That is the only way to fix that particular foul-up.
    If you have to deal with Named Instances either as a source or a target, then it gets more complicated.  The standard fix is to use the SQL Server Configuration Manager (or one of its earlier incarnations) to create a SQL client alias to redirect the connection.  This can be a pain to install and configure on multiple client servers.  The good news is that SQL Server Configuration Manager AND all of its earlier versions simply write a few registry keys.  Extracting the keys into a .reg file makes centralized automated deployment a snap.
    If the client is a 32-bit system, you have to extract the native key.  If it is 64-bit, you have to extract the native key and the WoW (32 bit on 64 bit host) key.
    First, pick a development system to create the actual registry key.  If you do this repeatedly, you can simply edit an existing registry file.  Create the entry using the SQL Configuration Manager.  You must use a 64-bit system to create the WoW key.  The following example redirects from a named instance “SQL01\SQLUtiluty” to a default instance on “SQL02”.
    Figure 3 – SQL Server Configuration Manager - Native
    Figure 3 shows the native key listing.
    Figure 4 – SQL Server Configuration Manager – WoW
    If you think you don’t need the WoW key because your app is 64-bit, think again.  SQL Server Management Studio is a 32-bit app, as are most SQL test utilities.  Always create both keys for 64-bit target systems.
    Now that the keys exist, we can extract them into a .reg file. Fire up REGEDIT and browse to the following location:  HKLM\Software\Microsoft\MSSQLServer\Client\ConnectTo.  You can also search the registry for the string value of one of the server names (old or new). Right click on the “ConnectTo” label and choose “Export”.  Save with an appropriate name and location.  The resulting file should look something like this:
    Figure 5 – SQL01_Alias.reg
    Repeat the process with the location: HKLM\Software\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo
    Note that if you have multiple alias entries, ALL of the entries will be exported.  In that case, you can edit the file and remove the extra aliases. You can edit the files together into a single file.
Just leave a blank line between new keys like this: Figure 6 – SQL01_Alias_All.reg Of course if you have an automatic way to deploy, it makes sense to have an automatic way to Un-deploy.  To delete a registry key, simply edit the .reg file and replace the target with a “-“ sign like so. Figure 7 – SQL01_Alias_UNDO.reg Now we have the ability to move any database to any server without having to install or change any applications on any client server.  The whole process should be transparent to the applications, which makes planning and coordinating database moves a far simpler task.
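For readers who cannot see the figures, here is a representative sketch of what the exported alias file can contain for the servers in Table 1 (the exact contents on your system may differ; this example assumes a TCP/IP alias, which is stored with the DBMSSOCN prefix):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
"SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo]
"SQL01\\SQLUtiluty"="DBMSSOCN,SQL02"

To undo a single alias, the same file can set the value data to a minus sign ("SQL01\\SQLUtiluty"=-), or the whole key can be dropped by prefixing the key path with a minus sign, which matches the undo approach in Figure 7.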

    Read the article

  • The softer side of BPM

    - by [email protected]
    BPM and RTD are great complementary technologies that together provide a much higher benefit than each of them separately. BPM covers the need for automating processes, making sure that there is uniformity, that rules and regulations are complied with and that the process runs smoothly and quickly processes the units flowing through it. By nature, this automation and unification can lead to a stricter, less flexible process. To avoid this problem it is common to encounter process definitions that include multiple conditional branches and human input to help direct processing in the direction that best applies to the current situation. This is where RTD comes into play. The selection of branches and conditions and the optimization of decisions is better left in the hands of a system that can measure the results of its decisions in a closed loop fashion and make decisions based on the empirical knowledge accumulated through observing the running of the process.
    When designing a business process there are key places in which it may be beneficial to introduce RTD decisions. These are:
    Thresholds - whenever a threshold is used to determine the processing of a unit, there may be an opportunity to make the threshold "softer" by introducing an RTD decision based on predicted results. For example an insurance company process may have a total claim threshold to initiate an investigation. Instead of having that threshold, RTD could be used to help determine what claims to investigate based on the likelihood they are fraudulent, cost of investigation and effect on processing time.
    Human decisions - sometimes a process will let the human participants make decisions about flow. For example, a call center process may leave the escalation decision to the agent. While this has flexibility, it may produce undesired results and asymmetry in customer treatment that is not based on objective functions but on subjective reasoning by the agent. Instead, an RTD decision may be introduced to recommend escalation or other kinds of treatments.
    Content Selection - a process may include the use of messaging with customers. The selection of the most appropriate message to the customer given the content can be optimized with RTD.
    A/B Testing - a process may have optional paths for which it is not clear which populations they work better for. Rather than making an arbitrary selection, or a selection by committee, of the option deemed the best, RTD can be introduced to dynamically determine the best path for each unit.
    In summary, RTD can be used to make BPM based process automation more dynamic and adaptable to the different situations encountered in processing, effectively making the automation softer and less rigid in its processing.

    Read the article

  • Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers

    - by mv
    Configuring Oracle iPlanet Web Server / Oracle Traffic Director to use crypto accelerators on T4-1 servers Jyri had written a technical article on Configuring Solaris Cryptographic Framework and Sun Java System Web Server 7 on Systems With UltraSPARC T1 Processors. I tried to find out what has changed since then in T4. I have used a T4-1 SPARC system with Solaris 10. Results slightly vary for Solaris 11.  For Solaris 11, the T4 optimization was implemented in libsoftcrypto.so while it was in pkcs11_softtoken_extra.so for Solaris 10. Overview of T4 processors is here in this blog. Many thanx to Chi-Chang Lin and Julien for their help. 1. Install Oracle iPlanet Web Server / Oracle Traffic Director.  Go to instance/config directory.  # cd /opt/oracle/webserver7/https-hostname.fqdn/config 2. List default PKCS#11 Modules # ../../bin/modutil -dbdir . -listListing of PKCS #11 Modules-----------------------------------------------------------1. NSS Internal PKCS #11 Moduleslots: 2 slots attachedstatus: loadedslot: NSS Internal Cryptographic Servicestoken: NSS Generic Crypto Servicesslot: NSS User Private Key and Certificate Servicestoken: NSS Certificate DB2. Root Certslibrary name: libnssckbi.soslots: 1 slot attachedstatus: loadedslot: NSS Builtin Objectstoken: Builtin Object Token----------------------------------------------------------- 3. Initialize the soft token data store in the $HOME/.sunw/pkcs11_softtoken/ directory # pktool setpin keystore=pkcs11Enter token passphrase: olderpasswordCreate new passphrase: passwordRe-enter new passphrase: passwordPassphrase changed. 4. Offload crypto operations to Solaris Crypto Framework on T4 $ ../../bin/modutil -dbdir . -nocertdb -add SCF -libfile /usr/lib/libpkcs11.so -mechanisms RSA:AES:SHA1:MD5 Module "SCF" added to database. Note that -nocertdb means modutil won't try to open the NSS softoken key database. It doesn't even have to be present. PKCS#11 library used is /usr/lib/libpkcs11.so. If the server is running in 64 bit mode, we have to use /usr/lib/64/libpkcs11.so Unlike T1 and T2, in T4 we do not have to disable mechanisms in softtoken provider using cryptoadm. 5. List again to check that a new module SCF is added # ../../bin/modutil -dbdir . -list Listing of PKCS #11 Modules-----------------------------------------------------------1. NSS Internal PKCS #11 Moduleslots: 2 slots attachedstatus: loadedslot: NSS Internal Cryptographic Servicestoken: NSS Generic Crypto Servicesslot: NSS User Private Key and Certificate Servicestoken: NSS Certificate DB2. SCFlibrary name: /usr/lib/libpkcs11.soslots: 2 slots attachedstatus: loadedslot: Sun Metaslottoken: Sun Metaslotslot: n2rng/0 SUNW_N2_Random_Number_Generator token: n2rng/0 SUNW_N2_RNG 3. Root Certs library name: libnssckbi.so slots: 1 slot attached status: loaded slot: NSS Builtin Objects token: Builtin Object Token----------------------------------------------------------- 6.  Create certificate in “Sun Metaslot” : I have used certutil, but you must use Admin Server CLI / GUI # ../../bin/certutil -S -x -n "Server-Cert" -t "CT,CT,CT" -s "CN=*.fqdn" -d . -h "Sun Metaslot"Enter Password or Pin for "Sun Metaslot": password 7. Verify that the certificate is created properly in “Sun Metslaot” # ../../bin/certutil -L -d . -h "Sun Metaslot"Certificate Nickname Trust AttributesSSL,S/MIME,JAR/XPIEnter Password or Pin for "Sun Metaslot": passwordSun Metaslot:Server-Cert CTu,Cu,Cu# 8. Associate this newly created certificate to http listener using Admin CLI/GUI. 
After that server.xml should have <http-listener> ...    <ssl>        <server-cert-nickname>Sun Metaslot:Server-Cert</server-cert-nicknamer>    </ssl> Note the prefix "Sun Metaslot" 9. Disable PKCS#11 bypass To use the accelerated AES algorithm, turn off PKCS#11 bypass, and configure modutil to have the AES mechanism go to the Metaslot. After you disable PKCS#11 bypasss using Admin GUI/CLI,  check that server.xml should have <server> ....    <pkcs11>         <enabled>1</enabled>         <allow-bypass>0</allow-bypass>     </pkcs11> With PKCS#11 bypass enabled, Oracle iPlanet Web Server will only use the RSA capability of the T4, provided certificate and key are stored in the T4 slot (Metaslot). Actually, the RSA op is never bypassed in NSS, it's always done with PKCS#11 calls. So the bypass settings won't affect the behavior of the probes for RSA at all. The only thing that matters if where the RSA key and certificate live, ie. which PKCS#11 token, and thus which PKCS#11 module gets called to do the work. If your certificate/key are in the NSS certificate/key db, you will see libsoftokn3/libfreebl libraries doing the RSA work. If they are in the Sun Metaslot, it should be the Solaris code. 10. Start the server instance # ../bin/startserv Oracle iPlanet Web Server 7.0.16 B09/14/2012 03:33Please enter the PIN for the "Sun Metaslot" token: password...info: HTTP3072: http-listener-1: https://hostname.fqdn:80 ready to accept requestsinfo: CORE3274: successful server startup 11. Figure out which process to run this DTrace script on # ps -eaf | grep webservd | grep -v dogwebservd 18224 18223 0 13:17:25 ? 0:07 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/root 18225 18224 0 13:17:25 ? 0:00 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/ (For Oracle Traffic Director look for process named "trafficd") We see that the child process id is “18225” 12. Clients for testing : You can use any browser. I used NSS tool tstclnt for testing $cat > req.txtGET /index.html HTTP/1.0 For checking both RSA and AES, I used cipher “:0035” which is TLS_RSA_WITH_AES_256_CBC_SHA $./tstclnt -h hostname -p 80 -d . -T -f -o -v -c “:0035” < req.txt 13. How do I make sure that crypto accelerator is being used 13.1 Create DTrace script The following D script should be able to uncover whether T4-specific crypto routine are being called or not. It also displays stats per second. # cat > t4crypto.d#!/usr/sbin/dtrace -spid$target::*rsa*:entry,pid$target::*yf*:entry{    @ops[probemod, probefunc] = count();}tick-1sec{    printa(@ops);    trunc(@ops);} Invoke with './t4crypto.d -p <pid> ' 13.2 EXPECTED PROBES FOR Solaris 10 : If offloading to T4 HW are correctly set up, the expected DTrace output would have these probes and libraries library Operations PROBES pkcs11_softtoken_extra.so RSA soft_decrypt_rsa_pkcs_decode, soft_encrypt_rsa_pkcs_encode soft_rsa_crypt_init_common soft_rsa_decrypt, soft_rsa_encrypt soft_rsa_decrypt_common, soft_rsa_encrypt_common AES yf_aes_instructions_present yf_aes_expand256, yf_aes256_cbc_decrypt, yf_aes256_cbc_encrypt, yf_aes256_load_keys_for_decrypt, yf_aes256_load_keys_for_encrypt, Note that these are for 256, same for 128, 192... these are for cbc, same for ecb, ctr, cfb128... 
DES yf_des_expand, yf_des_instructions_present yf_des_encrypt libmd_psr.so MD5 yf_md5_multiblock, yf_md5_instruction_present SHA1 yf_sha1_instruction_present, yf_sha1_multibloc 13.3 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITHOUT PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode    1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common      1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt                1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                   2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                    2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                    2 pkcs11_softtoken_extra.so.1   rijndael_key_setup_enc_yf       2 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common         2 pkcs11_softtoken_extra.so.1   yf_aes_expand256                2 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_decrypt           3 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_decrypt 3 pkcs11_softtoken_extra.so.1   big_mont_mul_yf                 6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                   6 pkcs11_softtoken_extra.so.1   yf_des_instructions_present     6 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_encrypt           8 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_encrypt 8 pkcs11_softtoken_extra.so.1   yf_mpmul_present                8 pkcs11_softtoken_extra.so.1   yf_aes_instructions_present    13 pkcs11_softtoken_extra.so.1   yf_des_encrypt                 18 libmd_psr.so.1                yf_md5_multiblock              41 libmd_psr.so.1                yf_md5_instruction_present     72 libmd_psr.so.1                yf_sha1_instruction_present    82 libmd_psr.so.1                yf_sha1_multiblock             82 This indicates that both RSA and AES ops are done in Solaris Crypto Framework. 13.4 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITH PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode 1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common   1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt             1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common      1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                 2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                 2 pkcs11_softtoken_extra.so.1   big_mont_mul_yf              6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                6 pkcs11_softtoken_extra.so.1   yf_mpmul_present             8 For this cipher, when I enable PKCS#11 bypass, Only RSA probes are being hit AES probes are not being hit. 13.5 ustack() for RSA operations / probefunc == "soft_rsa_decrypt" / Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so for both cases with and without bypass. 
When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`soft_unwrapkey+0x258 pkcs11_softtoken_extra.so.1`C_UnwrapKey+0x1ec libpkcs11.so.1`meta_unwrap_key+0x17c libpkcs11.so.1`meta_UnwrapKey+0xc4 libpkcs11.so.1`C_UnwrapKey+0xfc libnss3.so`pk11_AnyUnwrapKey+0x6b8 libnss3.so`PK11_PubUnwrapSymKey+0x8c libssl3.so`ssl3_HandleRSAClientKeyExchange+0x1a0 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc When PKCS#11 bypass is enabled (allow-bypass is 1) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`C_Decrypt+0x164 libpkcs11.so.1`meta_do_operation+0x27c libpkcs11.so.1`meta_Decrypt+0x4c libpkcs11.so.1`C_Decrypt+0xcc libnss3.so`PK11_PrivDecryptPKCS1+0x1ac libssl3.so`ssl3_HandleRSAClientKeyExchange+0xe4 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc libnsprwrap.so`ThreadMain+0x1c libnspr4.so`_pt_root+0xe8 13.6 ustack() FOR AES operations / probefunc == "yf_aes256_cbc_encrypt" / When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`yf_aes256_cbc_encrypt pkcs11_softtoken_extra.so.1`aes_block_process_contiguous_whole_blocks+0xb4 pkcs11_softtoken_extra.so.1`aes_crypt_contiguous_blocks+0x1cc pkcs11_softtoken_extra.so.1`soft_aes_encrypt_common+0x22c pkcs11_softtoken_extra.so.1`C_EncryptUpdate+0x10c libpkcs11.so.1`meta_do_operation+0x1fc libpkcs11.so.1`meta_EncryptUpdate+0x4c libpkcs11.so.1`C_EncryptUpdate+0xcc libnss3.so`PK11_CipherOp+0x1a0 libssl3.so`ssl3_CompressMACEncryptRecord+0x264 libssl3.so`ssl3_SendRecord+0x300 libssl3.so`ssl3_FlushHandshake+0x54 libssl3.so`ssl3_SendFinished+0x1fc libssl3.so`ssl3_HandleFinished+0x314 libssl3.so`ssl3_HandleHandshakeMessage+0x4ac libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so However when PKCS#11 bypass is disabled (allow-bypass is 1) this stack isn't getting called. 14. LIST OF ALL THE PROBES MATCHED BY D SCRIPT FOR REFERENCE # ./t4crypto.d -p 18225 -l ID PROVIDER MODULE FUNCTION NAME ... 
55720 pid18225 libmd_psr.so.1 yf_md5_instruction_present entry 55721 pid18225 libmd_psr.so.1 yf_sha256_instruction_present entry 55722 pid18225 libmd_psr.so.1 yf_sha512_instruction_present entry 55723 pid18225 libmd_psr.so.1 yf_sha1_instruction_present entry 55724 pid18225 libmd_psr.so.1 yf_sha256 entry 55725 pid18225 libmd_psr.so.1 yf_sha256_multiblock entry 55726 pid18225 libmd_psr.so.1 yf_sha512 entry 55727 pid18225 libmd_psr.so.1 yf_sha512_multiblock entry 55728 pid18225 libmd_psr.so.1 yf_sha1 entry 55729 pid18225 libmd_psr.so.1 yf_sha1_multiblock entry 55730 pid18225 libmd_psr.so.1 yf_md5 entry 55731 pid18225 libmd_psr.so.1 yf_md5_multiblock entry 55732 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_instructions_present entry 55733 pid18225 pkcs11_softtoken_extra.so.1 rijndael_key_setup_enc_yf entry 55734 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand128 entry 55735 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt128 entry 55736 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt128 entry 55737 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand192 entry 55738 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt192 entry 55739 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt192 entry 55740 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand256 entry 55741 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt256 entry 55742 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt256 entry 55743 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_encrypt entry 55744 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_encrypt entry 55745 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_encrypt entry 55746 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_encrypt entry 55747 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_encrypt entry 55748 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_encrypt entry 55749 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_encrypt entry 55750 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_encrypt entry 55751 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_encrypt entry 55752 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ctr_crypt entry 55753 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ctr_crypt entry 55754 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ctr_crypt entry 55755 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_encrypt entry 55756 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_encrypt entry 55757 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_encrypt entry 55758 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_decrypt entry 55759 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_decrypt entry 55760 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_decrypt entry 55761 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_decrypt entry 55762 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_decrypt entry 55763 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_decrypt entry 55764 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_decrypt entry 55765 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_decrypt entry 55766 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_decrypt entry 55767 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_decrypt entry 55768 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_decrypt entry 55769 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_decrypt entry 55771 pid18225 pkcs11_softtoken_extra.so.1 yf_des_instructions_present entry 55772 pid18225 pkcs11_softtoken_extra.so.1 yf_des_expand entry 55773 
pid18225 pkcs11_softtoken_extra.so.1 yf_des_encrypt entry 55774 pid18225 pkcs11_softtoken_extra.so.1 yf_mpmul_present entry 55775 pid18225 pkcs11_softtoken_extra.so.1 yf_montmul_present entry 55776 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montmul entry 55777 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montsqr entry 55778 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_restore_func entry 55779 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_ret_from_mont_func entry 55780 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_execute_slp entry 55781 pid18225 pkcs11_softtoken_extra.so.1 big_modexp_ncp_yf entry 55782 pid18225 pkcs11_softtoken_extra.so.1 big_mont_mul_yf entry 55783 pid18225 pkcs11_softtoken_extra.so.1 mpmul_arr_yf entry 55784 pid18225 pkcs11_softtoken_extra.so.1 big_mp_mul_yf entry 55785 pid18225 pkcs11_softtoken_extra.so.1 mpm_yf_mpmul entry 55786 pid18225 libns-httpd40.so nsapi_rsa_set_priv_fn entry ... 55795 pid18225 libnss3.so prepare_rsa_priv_key_export_for_asn1 entry 55796 pid18225 libresolv.so.2 sunw_dst_rsaref_init entry 55797 pid18225 libnssutil3.so NSS_Get_SEC_UniversalStringTemplate entry ... 55813 pid18225 libsoftokn3.so prepare_low_rsa_priv_key_for_asn1 entry 55814 pid18225 libsoftokn3.so rsa_FormatOneBlock entry 55815 pid18225 libsoftokn3.so rsa_FormatBlock entry 55816 pid18225 libnssdbm3.so lg_prepare_low_rsa_priv_key_for_asn1 entry 55817 pid18225 libfreebl_32fpu_3.so rsa_build_from_primes entry 55818 pid18225 libfreebl_32fpu_3.so rsa_is_prime entry 55819 pid18225 libfreebl_32fpu_3.so rsa_get_primes_from_exponents entry 55820 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpNoCRT entry 55821 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTNoCheck entry 55822 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTCheckedPubKey entry 55823 pid18225 pkcs11_kernel.so.1 key_gen_rsa_by_value entry 55824 pid18225 pkcs11_kernel.so.1 get_rsa_private_key entry 55825 pid18225 pkcs11_kernel.so.1 get_rsa_public_key entry 55826 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt entry 55827 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt entry 55828 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_crypt_init_common entry 55829 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt_common entry 55830 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt_common entry 55831 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_verify_init_common entry 55832 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_common entry 55833 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_common entry 55834 pid18225 pkcs11_softtoken_extra.so.1 generate_rsa_key entry 55835 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_genkey_pair entry 55836 pid18225 pkcs11_softtoken_extra.so.1 get_rsa_sha1_prefix entry 55837 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_sign_common entry 55838 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_verify_common entry 55839 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_recover entry 55840 pid18225 pkcs11_softtoken_extra.so.1 rsa_pri_to_asn1 entry 55841 pid18225 pkcs11_softtoken_extra.so.1 asn1_to_rsa_pri entry 55842 pid18225 pkcs11_softtoken_extra.so.1 soft_encrypt_rsa_pkcs_encode entry 55843 pid18225 pkcs11_softtoken_extra.so.1 soft_decrypt_rsa_pkcs_decode entry 55844 pid18225 pkcs11_softtoken_extra.so.1 soft_sign_rsa_pkcs_encode entry 55845 pid18225 pkcs11_softtoken_extra.so.1 soft_verify_rsa_pkcs_decode entry 55770 profile tick-1sec

    Read the article

  • Upgrading to Oracle Enterprise Manager 12c Release 2: Top Tips One Must Know

    - by AnkurGupta
    Recently Oracle announced incremental release of Enterprise Manager 12c called Enterprise Manager 12c Release 2 (EM12c R2) which includes several new exciting features (Press announcement). Right before the official release, we upgraded an internal production site from EM 12c R1 to EM 12c R2 and had an extremely pleasant experience. Let me share few key takeaways as well as few tips from this upgrade exercise. I - Why Should You Upgrade To Enterprise Manager 12c Release 2 While an upgrade is usually recommended primarily to take benefit of the latest features (which is valid for this upgrade as well), I found several other compelling reasons purely from deployment perspective. Standardize your EM deployment:  Enterprise Manager comprises of several different components (OMS, agents, plug-ins, etc) and it might be possible that these are at varied patch levels in your environment. For instance, in case of an environment containing Bundle Patch 1 (customer announcement), there is a good chance that you may not have all the components up-to-date. There are two possible reasons. Bundle Patch 1 involved patching different components (OMS, agents, plug-ins) with multiple one-off patches which may not have been applied to all components yet. Bundle Patch 1 for different platforms were not released together. Which means you may not have got the chance to patch all the components on different platforms. Note: BP1 patches are not mandatory to upgrade to EM12c R2 release EM 12c R2 provides an excellent opportunity to standardize your Cloud Control environment (OMS, repository and agents) and plug-ins to latest versions in single shot. All platform releases are made available simultaneously: For the very first time in the history of EM release, all the platforms were released on day one itself, which means you do not need to wait for platform specific binaries for EM OMS or Agent to perform install or upgrades in a heterogeneous environment. Highly refined and automated process – Upgrade process is by far the smoothest and the cleanest as compared to previous releases of Enterprise manager. Following are the ones that stand out. Automatic Plug-in management – Plug-in upgrade along with new plug-in deployment is supported in upgrade installer wizard which means bulk of the updates to OMS and repository can be done in the same workflow. Saves time and minimizes user inputs. Plug-in Upgrade or Migrate Auto Update: While doing the OMS and repository upgrade, you can use Auto Update screen in Oracle Universal Installer to check for any updates/patches. That will help you to avoid the know issues and will make sure that your upgrade is successful. Allows mass upgrade of EM Agents – A new dedicated menu has been added in the EM console for agent upgrade. Agent upgrade workflow is extremely simple that requires agent name as the only input. ADM / JVMD Manager/Agent upgrade – complete process is supported via UI screens. EM12c R2 Upgrade Guide is much simpler to follow as compared to those for earlier releases. This is attributed to the simpler upgrade process. Robust and Performing Platform: EM12c R2 release not only includes several new features, but also provides a more stable platform which incorporates several fixes and enhancements in the Enterprise Manager framework. II - Few Tips To Remember In my last post (blog link) I shared few tips and tricks from my experience applying the Bundle Patch. 
Recently I upgraded the same site to EM 12c R2 and found a few points that you must take note of while planning this upgrade. The tips below are also applicable to EM 12c R1 environments that do not have the Bundle Patch 1 patches applied.
Verify the monitored application certification – Specific targets like E-Business Suite have not yet been certified as managed targets in EM 12c R2. Therefore make sure to recheck the Enterprise Manager certification Matrix on My Oracle Support before planning the upgrade.
Plan downtime – Because EM 12c R2 is an incremental release of EM 12c, the EM 12c R1 to EM 12c R2 upgrade supports only the 1-system upgrade approach, which means there will be downtime.
OMS name change after upgrade – In case of multi OMS environments, the additional OMS is renamed after upgrade, which has a few implications when you upgrade the JVMD and ADP agents on the OMS. This is well documented in the upgrade guide but make sure you read through all the notes.
Upgrading BI Publisher – EM12c R2 is certified with BI Publisher 11.1.1.6.0 only. Therefore in case you are using EM 12c R1 which is integrated with BI Publisher 11.1.1.5.0, you must upgrade BI Publisher to 11.1.1.6.0. Follow the steps from the Advanced Installation and Configuration Guide here.
Perform post upgrade tasks – Make sure to perform the post upgrade steps mentioned in the documentation here. These include critical changes that must be done right after the upgrade to get the right configuration. For instance the Database plug-in should be upgraded to Revision 3 (12.1.0.2.0 [u120804]).
Delete the old OMS home – EM12c R1 to EM12c R2 is an out-of-place upgrade, which means it creates a new Oracle home for the OMS, plug-ins, etc. Therefore please ensure that you have sufficient extra space for the new OMS before starting the upgrade process, and that you clean up the old OMS home after the upgrade process. Steps are available here. DO NOT remove the agent home on the OMS host, because the agent is upgraded in-place. If you have a standby OMS setup then do look into the steps to upgrade the standby OMS from the upgrade guide before going ahead.
Read the right documentation – Make sure to follow the Upgrade Guide which provides the most comprehensive information on the EM12c R2 upgrade process. Additionally you can refer to other resources to get familiar with upgrade concepts:
Recorded webcast - Oracle Enterprise Manager Cloud Control 12c Release 2 Installation and Upgrade Overview
Presentation - Understanding Enterprise Manager 12.1.0.2 Upgrade
We are very excited about this latest release and look forward to hearing any feedback from your upgrade experience!

    Read the article

  • MAXDOP in SQL Azure

    - by Herve Roggero
    In my search for a better understanding of the scalability options of SQL Azure I stumbled on an interesting aspect: Query Hints in SQL Azure. More specifically, the MAXDOP hint. A few years ago I did a lot of analysis on this query hint (see article on SQL Server Central:  http://www.sqlservercentral.com/articles/Configuring/managingmaxdegreeofparallelism/1029/).  Here is a quick synopsis of MAXDOP: It is a query hint you use when issuing a SQL statement that gives you control over how many processors SQL Server will use to execute the query. For complex queries with lots of I/O requirements, more CPUs can mean faster parallel searches. However the impact can be drastic on other running threads/processes. If your query takes all available processors at 100% for 5 minutes... guess what... nothing else works. The bottom line is that more is not always better. The use of MAXDOP is more art than science... and a whole lot of testing; it depends on two things: the underlying hardware architecture and the application design. So there isn't a magic number that will work for everyone... except 1... :) Let me explain.
    The rules of engagement are different. SQL Azure is about sharing. Yep... you are forced to play nice with your neighbors.  To achieve this goal SQL Azure sets the MAXDOP to 1 by default, and ignores the use of the MAXDOP hint altogether. That means that all your queries will use one and only one processor.  It really isn't such a bad thing however. Keep in mind that in some of the largest SQL Server implementations MAXDOP is usually also set to 1. It is a well known configuration setting for large scale implementations. The reason is precisely to prevent rogue statements (like a SELECT * FROM HISTORY) from bringing down your systems (like a report that should have been running on a different server in the first place) and to avoid the overhead generated by executing too many parallel queries that could cause internal memory management nightmares for the host Operating System.
    In summary, forcing the MAXDOP to 1 in SQL Azure makes sense; it ensures that your database will continue to function normally even if one of the other tenants on the same server is running massive queries that would otherwise bring you down. Last but not least, keep in mind as well that when you test your database code for performance on-premise, make sure to set the DOP to 1 on your SQL Server databases to simulate SQL Azure conditions.
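    To make the hint concrete, this is the kind of statement-level hint an on-premise SQL Server query might carry and that SQL Azure simply ignores (the table name below is a placeholder, not from the original article):
    SELECT CustomerID, SUM(TotalDue) AS TotalSales
    FROM dbo.SalesOrderHeader
    GROUP BY CustomerID
    OPTION (MAXDOP 1);  -- cap this query at one processor
    On an on-premise test box you can also cap parallelism instance-wide to mimic the SQL Azure behavior described above:
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;
    Both snippets are only a sketch of the idea, not part of the original post.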

    Read the article

  • OBIEE 11.1.1 - How to configure HTTP compression / caching on Oracle BI Mobile app

    - by Ahmed Awan
     Applies to: OBIEE 11.1.1.5
     Supported Physical Devices and OS: The Oracle BI Mobile application with the HTTP compression / caching configuration has been tested on the following devices: iPhone 4S, 4 and 3GS; iPad 2 and 1. Note these devices must be running the latest iOS version, i.e. iOS 4.2.1; iOS 5 is also supported.
     Configuring Pre-requisites: Prior to configuration, the Oracle Web tier software must be installed on the server, as described in the product documentation, i.e. the Enterprise Deployment Guide for Oracle Business Intelligence in Section 3.2, "Installing Oracle HTTP Server." The steps for configuring compression and caching on Oracle HTTP Server are described in this PA blog at http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui and in support Doc ID 1312299.1.
     Configuration Steps in the Oracle BI Mobile application:
     1. Download the BI Mobile app from the Apple iTunes App Store. The link is http://itunes.apple.com/us/app/oracle-business-intelligence/id434559909?mt=8 .
     2. Add the server, for example http://pew801.us.oracle.com:7777/analytics/ ; here is how your “Server Setting” screen should look on your OBI Mobile app:
     Performance Gain Test (using Oracle® HTTP Server with OBIEE)
     The test with/without HTTP compression / caching was conducted on iPhone 4S / iPad 2 to measure the throughput (i.e. total bytes received) for Oracle® Business Intelligence Enterprise Edition. The table below shows the throughput comparison before and after using HTTP compression / caching for SampleApp, using the “QuickStart” dashboard and accessing the Overview, Details, Published Reporting and Scorecard reports. Testing shows that total bytes received were reduced from 2.3 MB to 723 KB.
     a. Test Results > Without HTTP Compression / Caching setting - Total Throughput (in Bytes) captured below: Total Bytes Statistics
     b. Test Results > With HTTP Compression / Caching settings - Total Throughput (in Bytes) captured below: Total Bytes Statistics
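     For orientation only, the compression and caching settings referenced above rely on the standard Apache modules shipped with Oracle HTTP Server (mod_deflate and mod_expires); a minimal sketch of the kind of httpd.conf directives involved might look like this (illustrative values only, not the exact steps from the PA blog or Doc ID 1312299.1):
     # Compress text resources before they are sent to the mobile client (mod_deflate)
     AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript
     # Let the client cache static resources instead of re-downloading them (mod_expires)
     ExpiresActive On
     ExpiresByType image/png "access plus 7 days"
     ExpiresByType text/css "access plus 7 days"
     ExpiresByType application/javascript "access plus 7 days"
     Always follow the referenced blog post and support note for the supported configuration.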

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:
    Step 5: Update statistics
    It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database:
    sp_updatestats
    The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database.  However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.
    Step 6: Set database options
    You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or the Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases.
    Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it on disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header.  Torn page detection works in much the same way in that it stores a bit in the page header for every 512 byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header.  This may not be something you would want to set unless there is a very specific reason.  Microsoft suggests using the CHECKSUM page verify option as this offers more protection.
    Step 7: Map database users to logins
    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
    Step 7: Map database users to logins

    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins do exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters; depending on what you want to do, you may use fewer than four.

    The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you would execute a query like the following:

        sp_change_users_login
            @Action = 'Update_One',
            @UserNamePattern = 'bob',
            @LoginName = 'bob'

    If the login does not exist, you can instruct SQL Server to create a login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server; you will need to follow a manual process to re-map those database user accounts. Continues…
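
    As a hedged illustration of Step 7 above, in which the user name 'bob' and the password are placeholders, the report and auto-fix variants might be run as follows:

        -- List database users that are not mapped to any login
        EXEC sp_change_users_login @Action = 'Report';

        -- Map user 'bob' to a login of the same name, creating the login
        -- with the supplied password if it does not already exist
        EXEC sp_change_users_login
            @Action = 'Auto_Fix',
            @UserNamePattern = 'bob',
            @Password = 'Str0ngP@ssw0rd';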

    Read the article

  • Pixel Perfect Collision Detection in Cocos2dx

    - by Happybirthday
    I am trying to port the pixel perfect collision detection to Cocos2d-x. The original version was made for Cocos2D and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/

    Here is my code for the Cocos2d-x version:

        bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2, bool pp, CCRenderTexture *_rt)
        {
            bool isColliding = false;
            CCRect intersection;
            CCRect r1 = spr1->boundingBox();
            CCRect r2 = spr2->boundingBox();

            intersection = CCRectMake(fmax(r1.getMinX(), r2.getMinX()), fmax(r1.getMinY(), r2.getMinY()), 0, 0);
            intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX() - intersection.getMinX());
            intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY() - intersection.getMinY());

            // Look for simple bounding box collision
            if ((intersection.size.width > 0) && (intersection.size.height > 0))
            {
                // If we're not checking for pixel perfect collisions, return true
                if (!pp)
                {
                    return true;
                }

                unsigned int x = intersection.origin.x;
                unsigned int y = intersection.origin.y;
                unsigned int w = intersection.size.width;
                unsigned int h = intersection.size.height;
                unsigned int numPixels = w * h;

                //CCLog("Intersection X and Y %d, %d", x, y);
                //CCLog("Number of pixels %d", numPixels);

                // Draw into the RenderTexture
                _rt->beginWithClear(0, 0, 0, 0);

                // Render both sprites: first one in RED and second one in GREEN
                glColorMask(1, 0, 0, 1);
                spr1->visit();
                glColorMask(0, 1, 0, 1);
                spr2->visit();
                glColorMask(1, 1, 1, 1);

                // Get color values of intersection area
                ccColor4B *buffer = (ccColor4B *)malloc(sizeof(ccColor4B) * numPixels);
                glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

                _rt->end();

                // Read buffer
                unsigned int step = 1;
                for (unsigned int i = 0; i < numPixels; i += step)
                {
                    ccColor4B color = buffer[i];
                    // A pixel containing both red and green means the sprites overlap there
                    if (color.r > 0 && color.g > 0)
                    {
                        isColliding = true;
                        break;
                    }
                }

                // Free buffer memory
                free(buffer);
            }

            return isColliding;
        }

    My code is working perfectly if I send the "pp" parameter as false, that is, if I only do a bounding box collision, but I am not able to get it working correctly for the case when I need pixel perfect collision. I think the OpenGL masking code is not working as I intended.

    Here is the code for "_rt":

        _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height);
        _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f, origin.y + visibleSize.height * 0.5f));
        this->addChild(_rt, 1000000);
        _rt->setVisible(true); // For testing

    I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone guide me with what I am doing wrong? Thank you for your time :)

    Read the article
