Search Results

Search found 7266 results on 291 pages for 'mx record'.


  • Employee Info Starter Kit - Visual Studio 2010 and .NET 4.0 Version (4.0.0) Available

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an ASP.NET-based web application with a deliberately simple set of user requirements: create, read, update, and delete (CRUD) the employee records of a company. Starting from just a single database table, it explores and solves most of the major problems in the web development architectural space. This open source starter kit makes extensive use of the major features available in the latest Visual Studio, ASP.NET, and SQL Server to build robust, scalable, secure, and maintainable web applications quickly and easily. Since its first release, the starter kit has become very popular in the web developer community, with 140,000+ downloads from the project web site. Visual Studio 2010 and .NET 4.0 bring lots of exciting features to make developers' lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available on both MSDN Code Gallery and CodePlex. Check out the latest version to enjoy the features available in Visual Studio 2010 and .NET 4.0. [ Release Notes ]

    Architectural Overview
    - Simple 2-layer architecture (user interface and data access layer) with one optional cache layer
    - ASP.NET Web Forms based user interface
    - Custom entity data container (with primitive C# types for data fields)
    - Active Record design pattern based data access layer, implemented in C# and Entity Framework 4.0 (an illustrative sketch of the pattern follows the lists below)
    - SQL Server stored procedures to perform the actual CRUD operations
    - Standard infrastructure (architecture, helper utilities) for automated integration (bottom-up) and unit testing

    Technology Utilized
    Programming languages/scripts: browser side: JavaScript; web server side: C# 4.0; database server side: T-SQL
    .NET Framework components: .NET 4.0 Entity Framework, .NET 4.0 optional/named parameters, .NET 4.0 Tuple, .NET 3.0+ extension methods, .NET 3.0+ lambda expressions, .NET 3.0+ anonymous types, .NET 3.0+ query expressions, .NET 3.0+ automatically implemented properties, .NET 3.0+ LINQ, .NET 2.0+ partial classes, .NET 2.0+ generic types, .NET 2.0+ nullable types
    ASP.NET: 3.5+ ListView (TBD), 3.5+ DataPager (TBD), 2.0+ GridView, 2.0+ FormView, 2.0+ skins, 2.0+ themes, 2.0+ master pages, 2.0+ ObjectDataSource, 1.0+ role-based security
    Visual Studio features: 2010 Coded UI Test, 2010 Layer Diagram, 2010 Sequence Diagram, 2010 Directed Graph, 2005+ Database Unit Test, 2005+ Unit Test, 2005+ Web Test, 2005+ Load Test
    SQL Server features: SQL Server 2005 stored procedures, SQL Server 2005 XML type, SQL Server 2005 paging support
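    The Active Record pattern mentioned above pairs each table with an entity class that knows how to load and save itself, delegating the actual work to stored procedures. Purely as an illustration of the idea (this is not the starter kit's actual code; the class, stored procedure, and column names below are hypothetical), such an entity might look like:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;

    // Hypothetical Active Record style entity: the object carries its own CRUD
    // behaviour and delegates the actual work to stored procedures.
    public class Employee
    {
        const string ConnectionString =
            "Data Source=.;Initial Catalog=EmployeeDb;Integrated Security=True";

        public int EmployeeId { get; set; }
        public string Name { get; set; }
        public decimal Salary { get; set; }

        // Insert or update this record via a stored procedure.
        public void Save()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("usp_Employee_InsertOrUpdate", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@EmployeeId", EmployeeId);
                command.Parameters.AddWithValue("@Name", Name);
                command.Parameters.AddWithValue("@Salary", Salary);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        // Load a single record by key, or return null if it does not exist.
        public static Employee Load(int employeeId)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("usp_Employee_GetById", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@EmployeeId", employeeId);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read()) return null;
                    return new Employee
                    {
                        EmployeeId = (int)reader["EmployeeId"],
                        Name = (string)reader["Name"],
                        Salary = (decimal)reader["Salary"]
                    };
                }
            }
        }
    }
    ```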

    Read the article

  • How to get an IP address out of the SPAMHAUS blacklist?

    - by vgv8
    I frequently read that it is possible to remove individual IP addresses from SPAMHAUS blacklisting. OK. Here is 91.205.43.252 (91.205.43.251 - 91.205.43.253), used by back3.stopspamers.com (back2.stopspamers.com, back1.stopspamers.com) in a geo-cluster on dedicated servers in Switzerland. The queries: http://www.spamhaus.org/query/bl?ip=91.205.43.251 http://www.spamhaus.org/query/bl?ip=91.205.43.252 http://www.spamhaus.org/query/bl?ip=91.205.43.253 show that 91.205.43.251 - 91.205.43.253 are all listed in the SBL80808 blacklist. The SBL80808 record says: "Ref: SBL80808 91.205.40.0/22 is listed on the Spamhaus Block List (SBL) 01-Apr-2010 05:52 GMT | SR04 Spamming and now seems this place is involved in other fraud". 91.205.43.251 - 91.205.43.253 are not listed among the criminal IP addresses individually, yet there is no way to remove them individually from the blacklist. How can these individual addresses (91.205.43.251 - 91.205.43.253) be removed from the SPAMHAUS blacklist? And why on earth is SPAMHAUS blacklisting a spam-stopping service? This is only one example of many. My related posts: Blacklist IP database. Update: From the answer provided I realized that my question was not even understood. These IP addresses (91.205.43.251 - 91.205.43.253) are not blacklisted individually; they are blacklisted through their supernet 91.205.40.0/22. Also note that the dedicated server, the ISP, and the customer are in different, distant countries. Update 2: http://www.spamhaus.org/sbl/sbl.lasso?query=SBL80808#removal says: "To have record SBL80808 (91.205.40.0/22) removed from the SBL, the Abuse/Security representative of RIPE (or the Internet Service Provider responsible for supplying connectivity to 91.205.40.0/22) needs to contact the SBL Team". There are dozens of "abusers" in that SBL80808 listing. The company using the dedicated server is neither an ISP nor a RIPE representative, so it cannot handle the issue itself. And even if it could, anyone on the internet can press "Report spam" to get the range blacklisted again, so that approach is fruitless. These techniques are broadly used by criminals and spammers; see also my post on blacklisting. This is just one specific example, but there are many, many more.
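    For what it's worth, a listing of the covering netblock (91.205.40.0/22 in this case) is exactly why every address inside the range shows up as listed when queried. If you want to check the listing status of each address programmatically, the standard DNSBL lookup is to reverse the octets and append the zone name; a listed address resolves to a 127.0.0.x return code, while an unlisted one returns NXDOMAIN. Below is a minimal C# sketch of that check, offered purely as an illustration (it is not an official Spamhaus tool):

    ```csharp
    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Sockets;

    class DnsblCheck
    {
        // Reverses the octets of an IPv4 address and appends the DNSBL zone,
        // e.g. 91.205.43.252 -> 252.43.205.91.sbl.spamhaus.org
        static string BuildQueryName(string ipv4, string zone)
        {
            var reversed = string.Join(".", ipv4.Split('.').Reverse());
            return reversed + "." + zone;
        }

        static void Main()
        {
            string[] addresses = { "91.205.43.251", "91.205.43.252", "91.205.43.253" };
            foreach (var ip in addresses)
            {
                string query = BuildQueryName(ip, "sbl.spamhaus.org");
                try
                {
                    // A listed address resolves to a 127.0.0.x return code;
                    // an unlisted address produces NXDOMAIN (SocketException).
                    var entry = Dns.GetHostEntry(query);
                    Console.WriteLine("{0} is listed ({1})", ip,
                        string.Join(", ", entry.AddressList.Select(a => a.ToString())));
                }
                catch (SocketException)
                {
                    Console.WriteLine("{0} is not listed", ip);
                }
            }
        }
    }
    ```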

    Read the article

  • I'm Not Bi-Polar, I'm Bi-Winning

    - by David Dorf
    On March 1st, Charlie Sheen joined Twitter and amassed 1M followers in 25 hours and 17 minutes, setting an official world record. So why does it take your brand so long to collect followers? Easy: your brand isn't a train wreck. Wouldn't it be great if your customers were chatting about your products as much as they're talking about Charlie #winning? There are a couple of things retailers can do. First, you can offer check-ins to your customers, which can occasionally get an "ooh, what are you buying there?" in the social network. Another method is to allow customers to "like" particular products on your Web site. Companies like Wet Seal excel at that. We've been experimenting with automatic posting from the POS, assuming a customer has opted in. When you buy something in a store, the POS can automatically post "Dave just bought something at Wet Seal" to Facebook, Twitter, and Foursquare simultaneously. We stopped short of mentioning the specific product so we don't pull a Beacon. The idea is the same: get the conversation started. Give customers a virtual water-cooler where they can discuss products and influence buying decisions. The guys over at ShopSocially have done something very similar. On the Facebook page for Cafe Press, customers can claim purchases, effectively bragging on their walls. Each posting goes through the Facebook newsfeed and gets friends interested. They are seeing over 1,000 purchases being shared daily, and that's generating over 300,000 brand impressions. Sounds like a winning idea.

    Read the article

  • Who benefits from the use of Design Patterns?

    Asking who benefits from the use of design patterns is like asking who benefits from clean air or a good education: all of the stakeholders of a project benefit.
    Project Sponsor: Project sponsors benefit because design patterns promote reduced development time, which translates into shorter project timelines and a greater return on investment compared to projects that do not make use of design patterns.
    Project Manager: Project managers benefit because design patterns reduce the amount of time needed to design a system, and the sub-components of the system typically already have a proven track record.
    System Architect/Engineer: System architects and engineers benefit because design patterns reduce the amount of time needed to design the core of a system. The time saved is used to adapt the design pattern, through innovative design and common design principles, to the project's requirements.
    Programmer: Programmers benefit because they can reuse the code already established by the design pattern and only have to integrate the changes outlined by the system architects/engineers.
    Tester: Testers benefit because they can adapt the existing tests established for the design pattern to take into account the changes made by the system architects/engineers.
    User: Users benefit because the software is typically delivered sooner than in projects that do not incorporate design patterns, and they can assume the system will work as designed because it is based on a design that has already been proven to work properly.

    Read the article

  • Agile PLM on Developing Agile PLM: Software Lifecycle Management

    - by Kerrie Foy
    Change is constant.  That saying couldn’t be truer when applied to software development.   And with all that change comes extensive product complexity.  How do you manage it all?  As software developers ourselves, we can certainly empathize with the challenge. On April 3, 2012 Stephen Van Lare, VP of PLM Product Development, hosted a webcast to share how Oracle uses Agile to develop Agile – a PLM solution for managing a PLM solution!   Stephen passionately shared his unique insight based on 10 years of using Agile PLM to manage the development process, as well as customer use cases.  He shared our time-proven view of the software’s relationship to the product record, while pointing out that PLM is not source control.  He began with the challenges of software development, which boiled down to the deduction that “despite many great tools in the software development industry, it takes a lot more than good source control, more than good bug tracking, to get to an on-time, on-budget and quality release in your marketplace.   It requires defining the right things you want to do, managing the scope, managing your schedule, and, most importantly, managing the change to all those things over the lifecycle of the process. And this is the definition of PLM.”   Stephen then defined the relationship of PLM to the software development process by detailing the two main use cases –  Product Lifecycle and Mechatronics – which can be used simultaneously and in fact are already used in most industries today.  The Product Lifecycle use case is used to manage artifacts and change throughout product development, while the Mechatronics use case involves the software, hardware and electrical design in the BOM.  In essence, PLM is just as relevant to software as the rest of the BOM when trying to maximize profits during any phase of the lifecycle. Please take the opportunity to watch Stephen Van Lare as he details how and why based on his own experience developing Agile with Agile, as well as a lively Q&A session, in the Software PLM Webcast Replay.

    Read the article

  • Forum software advice needed

    - by David Thompson
    Hello all ... we want to migrate our site's current forum (proprietary built) to a newer, more modern (feature-rich) platform. I've been looking around at the available options and have narrowed it down to vBulletin, Vanilla or Phorum (unless you have another suggestion?). I hope someone here can give me some feedback on their experiences either migrating to a new forum or working deeply with one. The current forum we have has approx 2.2 million threads in it and is contained in a MySQL database. Data migration is obviously the first issue; is one of the major forum vendors better or worse in this regard? The software needs to be able to be clustered and cached to ensure availability and performance. We want it to be PHP based and store its data in MySQL. The code needs to be open to allow us to highly customise the software, both to strip out a lot of stuff and to integrate our site's features. A lot of the forums I've looked at duplicate features of our main site, in particular member management, profiles etc. I realise we'll have to do a good bit of development in removing these and tying it all back to the main site, so we want to find a platform that makes this kind of integration as easy as possible. Finally, I guess, there's 'future proofing' the forum (as best as possible) given the above. Which platform will allow us to customise it but also let us keep in step with upgrades? Which forum software has the best track record for bringing new features online in a timely manner? etc. etc. I know it's a big question but if anyone here has any experience in some or all of the above I'd be very grateful.

    Read the article

  • OS X Server DNS management

    - by Sorin Buturugeanu
    I have an OS X 10.6 Server running PHP, Apache, MySQL, and DNS. I want to take the DNS management out of the Server Admin app. I know that the DNS configuration files (the ones BIND uses) are plain text files (which have to obey some rules, obviously). The main reason for this is that I wanted to set up DKIM for one of my domains, and I had to add a TXT record on the subdomain pm._domainkey.example.com. Server Admin would not let me add that subdomain because of the "invalid" underscore character. I searched for web-based DNS management tools (ones I could install on my server to manage my DNS records), but I couldn't find any good ones. (There were a couple that I managed to install, but they didn't see the configuration that I had already set up in Server Admin.) Now I'm looking into editing the config files directly, but I don't know where they're located. This is a test/development server, so messing it up wouldn't be such a disaster. I know "I shouldn't do this", but I want to :). Thanks for your help.
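    For reference, the record being added is an ordinary TXT record on an underscored label, which is exactly what the DKIM specification calls for even though Server Admin rejects it. In a hand-edited BIND zone file it would look roughly like the following, where the selector "pm", the domain, and the key value are placeholders:

    ```
    ; hypothetical DKIM public-key TXT record in a BIND zone file
    pm._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqG...IDAQAB"
    ```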

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
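    As a rough sketch of what those few lines look like (assuming the Chart control's Serializer API; the exact enum values and overloads should be verified against the Chart Controls documentation), saving a chart snapshot to a byte array for storage in a database row, and reloading it later, might be done like this:

    ```csharp
    using System.IO;
    using System.Web.UI.DataVisualization.Charting;

    public static class ChartSnapshotHelper
    {
        // Persist the chart's data and appearance to a byte array (e.g. to be
        // stored in a database column), then reconstitute it on another page.
        public static byte[] SaveSnapshot(Chart chart)
        {
            using (var stream = new MemoryStream())
            {
                chart.Serializer.Content = SerializationContents.All;   // data + appearance
                chart.Serializer.Format = SerializationFormat.Xml;
                chart.Serializer.Save(stream);
                return stream.ToArray();
            }
        }

        public static void LoadSnapshot(Chart chart, byte[] snapshot)
        {
            using (var stream = new MemoryStream(snapshot))
            {
                chart.Serializer.Load(stream);
            }
        }
    }
    ```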

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest ScriptsSome scripts act on information returned by the server, e.g. act on first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way.As the load test cases should be carried out in a comparable and straight forward manner simply cancel a transaction in case a collision occurs is clearly not an option. In case you increase the number of virtual users this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user) but later steps would be rarely visited successfully or at all, depending on the application logic.A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time.Once a virtual user has passed the critical path it will dequeue himself from the head of the queue and continue with his actions. This does theoretically allow virtual users to run in parallel all steps of the transaction which are not part of the critical path.In practice it has been seen this is rarely the case, though it does not allow adding more than N users to perform a transaction without causing delays due to virtual users waiting in the queue. N being the time of the total transaction divided by the sum of the time of all critical steps in this transaction.While this problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. per country filter, ends with 0..9 filter, etc.This would require additional handling of these additional queues of slots for the virtual users at the head of the queue in order to maintain the mutually exclusive access to the first element in the list returned by the server at any one time of the load test. Such an improved handling of multiple queues and/or multiple slots is above the subject of this paper.Shared Data Services Pre-RequisitesStart WebLogic Server to host Shared Data ServicesYou will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If however you installed the default package including OLT and OTM, you may follow the instructions below to start and verify WebLogic installation.To start the WebLogic Server deployed underneath of Oracle Load Testing and/or Oracle Test Manager you can go to your Start menu, Oracle Application Testing Suite and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu.To verify the service has been started you can run the Microsoft Management Console for Services by Selecting Run from the Start Menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service, once it has changed it status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horse-power.Verify WebLogic Server user credentialsYou will have to make sure that your WebLogic Server is installed and started. 
Next open the Oracle WebLogic Server Adminstration Console on http://localhost:8088/console.It may take a while until the application is deployed and started. It may display the following until the Administration Console has been deployed on the fly.Afterwards you can login using the username oats and the password that you selected during install time for your Application Testing Suite administrative purposes.This will bring up the Home page of you WebLogic Server. You have actually verified that you are able to login with these credentials already. However if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab.Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work are exceeding this quick overview and have to be selected with the WebLogic Server Adminstration Guide in mind.Shared Data Services pre-requisites for Load testingOpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback.These are pre-requisites you want to use for load testing with Shared Data Services.Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not playback reliably in the current version 9.20.0370 of Oracle Load Testing (OLT) and encryption of credentials still seemed to be mandatory as well.General Encryption settingsSelect OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.Enable global shared data access credentialsSelect OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server to host Shared Data Services.Please note, that you may want to replace the localhost in Address with the hosts realname in case you plan to run load tests with Loadtest Agents running on remote systems.Queued Processing of TransactionsEnable Shared Data Services Module in Script PropertiesThe Shared Data Services Module has to be enabled for each Script that wants to employ the Shared Data Service Queue functionality in OpenScript. It can be enabled under the Script menu selecting Script Properties. On the Script Properties Dialog select the Modules section and check Shared Data to enable Shared Data Service Module for your script. 
Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript.@ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;Record your scriptRecord your script as usual and then add the following things for Queue handling in the Initialize code block, before the first step and after the last step of your critical path and in the Finalize code block.The java code to be added at individual locations is explained in the following sections in full detail.Create a Shared Data Queue in InitializeTo create a Shared Data Queue go to the Java view of your script and enter the following statements to the initialize() code block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);This will create an instantiation of the Shared Data Queue object named queueA which is maintained for upto 120 minutes.If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider to use a dynamic queueName based on filters of your result list being concurrently accessed.Prepare a unique id for each IterationIn order to keep track of individual virtual users in our queue we need to create a unique identifier from the virtual user id and the used username right after retrieving the next record from our databank file.getDatabank("Usernames").getNextDatabankRecord();getVariables().set("usernameValue1","VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");String usernameValue = getVariables().get("usernameValue1");info("Now running virtual user " + usernameValue);As you can see from the above code block, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}} which is a concatenation of the virtual user id and the iterationnumber for general uniqueness; as well as the username from our databank, the timestamp and a random number for making it further unique and ease spotting of errors.Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues.The value is then retrieved with the getVariables.get() method call and assigned to the usernameValue String used throughout the script.Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered to remove concurrency of multiple virtual users running with the same userid and therefor accessing the same "My Inbox" in step 6. This will effectively give each virtual user a userid from the databank file. Make sure you have enough userids to remove this second hurdle.Enqueue and attend Queue before Critical PathTo maintain the right order of virtual users being allowed into the critical path of the transaction the following pseudo step has to be added in front of the first critical step. 
In the case of this example this is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.beginStep("[0] Waiting in the Queue", 0);{info("Enqueued virtual user " + usernameValue + " at the end of queueA");sharedData.offerLast("queueA", usernameValue);info("Wait until the user is the first in queueA");String queueValue1 = null;do {// we wait for at least 0.7 seconds before we check the head of the// queue. This is the time it takes one user to move through the// critical path, i.e. pass steps [5] Enter country and [6] Assign// to meThread.sleep(700);queueValue1 = (String) sharedData.peekFirst("queueA");info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );info("The current user is '"+ usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1) );} while ( queueValue1.indexOf(usernameValue) < 0 );info("Now the user is the first in queueA");}endStep();This will enqueue the username to the tail of our Queue. It will will wait for at least 700 milliseconds, the time it takes for one user to exit the critical path and then compare the head of our queue with it's username. This last step will be repeated while the two are not equal (indexOf less than zero). If they are equal the indexOf will yield a value of zero or larger and we will perform the critical steps.Dequeue after Critical PathAfter the virtual user has left the critical path and complete its last step the following code block needs to dequeue the virtual user. In the case of our example this is right after the action has been actually assigned to the virtual user. This will allow the next virtual user to retrieve the list of actions still available and in turn let him make his selection/assignment.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");The current user is removed from the head of the queue. The next one will now be able to match his username against the head of the queue.Clear and Destroy Queue for FinishWhen the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.info("Clear queueA");sharedData.clearQueue("queueA");info("Destroy queueA");sharedData.destroyQueue("queueA");The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing they will be caught in a loop.I found it better to maintain a separate Reset Queue script which contained only the following code in the initialize() block. I use to call this script to make sure the queue is cleared in between multiple Loadtest runs. This script could also even be added as the first in a larger scenario, which would execute it only once at very start of the Loadtest and make sure the queues do not contain any stale entries.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);info("Clear queueA");sharedData.clearQueue("queueA");This will create a Shared Data Queue instance of queueA and clear all entries from this queue.Monitoring QueueWhile creating the scripts it was useful to monitor the contents, i.e. the current first user in the Queue. 
The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will continuously monitor the first element of the Queue and write an informational message with the current username Value to the Result window.info("Monitor the first users in queueA");String queueValue1 = null;do {queueValue1 = (String) sharedData.peekFirst("queueA");if (queueValue1 != null)info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );} while ( true );This script can be run from OpenScript parallel to a loadtest performed by the Oracle Load Test.However it is not recommend to run this in a production loadtest as the performance impact is unknown. Accessing the Queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OTL. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as the pollFirst() and even offerLast() writing to the tail of the Queue usually returned after an average 0.1 seconds.Debugging QueueWhile debugging the scripts the following was useful to remove single entries from its head, i.e. the current first user in the Queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will remove the first element of the Queue and write an informational message with the current username Value to the Result window.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length() );ReferencesOracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05]Chapter 17 Using the Shared Data Modulehttp://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zipOracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04]Administration Console Online Help - Manage users and groupshttp://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database, and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcript, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers in the metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slow, so long as queries run quick and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable and supportable X-years down the road. Here is the current state of the application: Web Server/Data Processing: Pipeline Pilot on Windows Server + IIS Database: Oracle 10g, ~1TB of data, ~180 tables with several billion-plus row tables Network Storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema. Below is an example of how you can make use of the Table Looping functoid to bring together these flat elements and create your repeated group. Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider a source schema with flat person fields and a destination schema with a repeating person record. Although the Table Looping functoid states that its first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value. In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate the map you will see that BizTalk warns you about the scoping link for the Table Looping functoid, but this can be safely ignored: C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter. Testing the map then produces the repeated-record output shown in the original post.
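    Purely as an illustration of the shape of such a map (the element names below are hypothetical, not the schemas from the post), the Table Looping and Table Extractor functoids turn flat, dispersed source fields into a repeating destination record:

    ```xml
    <!-- hypothetical flattened source -->
    <People>
      <Person1Name>Alice</Person1Name>
      <Person1Sex>F</Person1Sex>
      <Person2Name>Bob</Person2Name>
      <Person2Sex>M</Person2Sex>
    </People>

    <!-- destination built by the Table Looping / Table Extractor functoids -->
    <People>
      <Person>
        <Name>Alice</Name>
        <Sex>F</Sex>
      </Person>
      <Person>
        <Name>Bob</Name>
        <Sex>M</Sex>
      </Person>
    </People>
    ```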

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 3)

    Over the past two weeks I've shown how to build a store locator application using ASP.NET and the free Google Maps API and Google's geocoding service. Part 1 looked at creating the database to record the store locations. This database contains a table named Stores with columns capturing each store's address and latitude and longitude coordinates. Part 1 also showed how to use Google's geocoding service to translate a user-entered address into latitude and longitude coordinates, which could then be used to retrieve and display those stores within (roughly) a 15-mile area. At the end of Part 1, the results page listed the nearby stores in a grid. In Part 2 we used the Google Maps API to add an interactive map to the search results page, with each nearby store displayed on the map as a marker. The map added in Part 2 certainly improves the search results page, but the way the nearby stores are displayed on the map leaves a bit to be desired. For starters, each nearby store is displayed on the map using the same marker icon, namely a red pushpin. This makes it difficult to match up the nearby stores listed in the grid with those displayed on the map. Hovering the mouse over a marker on the map displays the store number in a tooltip, but ideally a user could click a marker to see more detailed information about the store, such as its address, phone number, a photo of the storefront, and so forth. This third and final installment shows how to enhance the map created in Part 2. Specifically, we'll see how to customize the marker icons displayed in the map to make it easier to identify which marker corresponds to which nearby store location. We'll also look at adding rich popup windows to each marker, which include detailed store information and can be updated further to include pictures and other HTML content. Read on to learn more!
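    The article itself covers the JavaScript side of the Google Maps API; as a rough sketch of one way the server side could feed it (this is not the author's code; the StoreMarker class, its members, and the icon-naming convention are hypothetical), the nearby stores can be serialized to JSON with each entry carrying its own icon URL and info-window markup:

    ```csharp
    using System.Collections.Generic;
    using System.Web.Script.Serialization;

    public class StoreMarker
    {
        public int StoreNumber { get; set; }
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public string IconUrl { get; set; }     // e.g. a numbered pushpin image per store
        public string InfoHtml { get; set; }    // address, phone, photo markup for the popup
    }

    public static class StoreMarkerBuilder
    {
        // Serializes the nearby stores so the page's map script can create one
        // marker per store with a distinct icon and a rich popup window.
        public static string ToJson(IEnumerable<StoreMarker> stores)
        {
            return new JavaScriptSerializer().Serialize(stores);
        }
    }
    ```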

    Read the article

  • Buy HTC HD7 Windows Phone 7 From Airtel In India @ Rs. 29990

    - by Gopinath
    Are you looking for the HTC HD7 Windows Phone 7 in India? Head over to an Airtel showroom near you to grab one. Airtel, in partnership with HTC, is offering the HD7 Windows Phone 7 for Rs. 29,990, and users will get 2 GB of data usage for 6 months at Rs. 300. Mr. Shireesh Joshi, CMO-Mobile Services of Bharti Airtel, said in a press conference: We are delighted with the opportunity to bring the eagerly-awaited HTC HD7 Smartphone in India. Combining the strength of the airtel brand and network with the innovation and design of HTC and the great user-interface of Windows Phone 7, we are happy to bring another first for our customers that will take mobile communications to a whole new level. The HD7 has a 4.3-inch display, a kickstand to rest the phone on a table, a 5MP autofocus camera that records 720p video, a 1GHz processor, 576MB of RAM and 16GB of internal memory. Even though this is the official launch of the HTC HD7 in India, the phone has been available in the market for quite some time at an approximate price of Rs. 27,000. So it's your call whether to buy it at HTC authorized retailers like Airtel for Rs. 29K or in the open market for Rs. 27K. HTC HD7 Promo Video. Thanks Fonearena.

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by Leniel Macaferi
    Estou animado para anunciar algumas grandes melhorias para os Web Sites da Windows Azure que introduzimos no início deste verão.  As melhorias de hoje incluem: uma nova opção de hospedagem adaptável compartilhada de baixo custo, suporte a domínios personalizados para websites hospedados em modo compartilhado ou em modo reservado usando registros CNAME e A-Records (o último permitindo naked domains), suporte para deployment contínuo usando tanto CodePlex e GitHub, e a extensibilidade FastCGI. Todas essas melhorias estão agora online em produção e disponíveis para serem usadas imediatamente. Nova Camada Escalonável "Compartilhada" A Windows Azure permite que você implante e hospede até 10 websites em um ambiente gratuito e compartilhado com múltiplas aplicações. Você pode começar a desenvolver e testar websites sem nenhum custo usando este modo compartilhado (gratuito). O modo compartilhado suporta a capacidade de executar sites que servem até 165MB/dia de conteúdo (5GB/mês). Todas as capacidades que introduzimos em Junho com esta camada gratuita permanecem inalteradas com a atualização de hoje. Começando com o lançamento de hoje, você pode agora aumentar elasticamente seu website para além desta capacidade usando uma nova opção "shared" (compartilhada) de baixo custo (a qual estamos apresentando hoje), bem como pode usar a opção "reserved instance" (instância reservada) - a qual suportamos desde Junho. Aumentar a capacidade de qualquer um desses modos é fácil. Basta clicar na aba "scale" (aumentar a capacidade) do seu website dentro do Portal da Windows Azure, escolher a opção de modo de hospedagem que você deseja usar com ele, e clicar no botão "Salvar". Mudanças levam apenas alguns segundos para serem aplicadas e não requerem nenhum código para serem alteradas e também não requerem que a aplicação seja reimplantada/reinstalada: A seguir estão mais alguns detalhes sobre a nova opção "shared" (compartilhada), bem como a opção existente "reserved" (reservada): Modo Compartilhado Com o lançamento de hoje, estamos introduzindo um novo modo de hospedagem de baixo custo "compartilhado" para Web Sites da Windows Azure. Um website em execução no modo compartilhado é implantado/instalado em um ambiente de hospedagem compartilhado com várias outras aplicações. Ao contrário da opção de modo free (gratuito), um web-site no modo compartilhado não tem quotas/limite máximo para a quantidade de largura de banda que o mesmo pode servir. Os primeiros 5 GB/mês de banda que você servir com uma website compartilhado é grátis, e então você passará a pagar a taxa padrão "pay as you go" (pague pelo que utilizar) da largura de banda de saída da Windows Azure quando a banda de saída ultrapassar os 5 GB. Um website em execução no modo compartilhado agora também suporta a capacidade de mapear múltiplos nomes de domínio DNS personalizados, usando ambos CNAMEs e A-records para tanto. O novo suporte A-record que estamos introduzindo com o lançamento de hoje oferece a possibilidade para você suportar "naked domains" (domínios nús - sem o www) com seus web-sites (por exemplo, http://microsoft.com além de http://www.microsoft.com). Nós também, no futuro, permitiremos SSL baseada em SNI como um recurso nativo nos websites que rodam em modo compartilhado (esta funcionalidade não é suportada com o lançamento de hoje - mas chagará mais tarde ainda este ano, para ambos as opções de hospedagem - compartilhada e reservada). 
Você paga por um website no modo compartilhado utilizando o modelo padrão "pay as you go" que suportamos com outros recursos da Windows Azure (ou seja, sem custos iniciais, e você só paga pelas horas nas quais o recurso estiver ativo). Um web-site em execução no modo compartilhado custa apenas 1,3 centavos/hora durante este período de preview (isso dá uma média de $ 9.36/mês ou R$ 19,00/mês - dólar a R$ 2,03 em 17-Setembro-2012) Modo Reservado Além de executar sites em modo compartilhado, também suportamos a execução dos mesmos dentro de uma instância reservada. Quando rodando em modo de instância reservada, seus sites terão a garantia de serem executados de maneira isolada dentro de sua própria VM (virtual machine - máquina virtual) Pequena, Média ou Grande (o que significa que, nenhum outro cliente da Windows azure terá suas aplicações sendo executadas dentro de sua VM. Somente as suas aplicações). Você pode executar qualquer número de websites dentro de uma máquina virtual, e não existem quotas para limites de CPU ou memória. Você pode executar seus sites usando uma única VM de instância reservada, ou pode aumentar a capacidade tendo várias instâncias (por exemplo, 2 VMs de médio porte, etc.). Dimensionar para cima ou para baixo é fácil - basta selecionar a VM da instância "reservada" dentro da aba "scale" no Portal da Windows Azure, escolher o tamanho da VM que você quer, o número de instâncias que você deseja executar e clicar em salvar. As alterações têm efeito em segundos: Ao contrário do modo compartilhado, não há custo por site quando se roda no modo reservado. Em vez disso, você só paga pelas instâncias de VMs reservadas que você usar - e você pode executar qualquer número de websites que você quiser dentro delas, sem custo adicional (por exemplo, você pode executar um único site dentro de uma instância de VM reservada ou 100 websites dentro dela com o mesmo custo). VMs de instâncias reservadas têm um custo inicial de $ 8 cents/hora ou R$ 16 centavos/hora para uma pequena VM reservada. Dimensionamento Elástico para Cima/para Baixo Os Web Sites da Windows Azure permitem que você dimensione para cima ou para baixo a sua capacidade dentro de segundos. Isso permite que você implante um site usando a opção de modo compartilhado, para começar, e em seguida, dinamicamente aumente a capacidade usando a opção de modo reservado somente quando você precisar - sem que você tenha que alterar qualquer código ou reimplantar sua aplicação. Se o tráfego do seu site diminuir, você pode diminuir o número de instâncias reservadas que você estiver usando, ou voltar para a camada de modo compartilhado - tudo em segundos e sem ter que mudar o código, reimplantar a aplicação ou ajustar os mapeamentos de DNS. Você também pode usar o "Dashboard" (Painel de Controle) dentro do Portal da Windows Azure para facilmente monitorar a carga do seu site em tempo real (ele mostra não apenas as solicitações/segundo e a largura de banda consumida, mas também estatísticas como a utilização de CPU e memória). Devido ao modelo de preços "pay as you go" da Windows Azure, você só paga a capacidade de computação que você usar em uma determinada hora. 
So, if your site runs in shared mode for most of the month (at 1.3 US cents/hour, roughly 2.64 Brazilian centavos/hour), but there is a weekend when it becomes very popular and you decide to increase its capacity by switching it to reserved mode so that it runs in its own dedicated VM (at 8 US cents/hour, roughly 16 Brazilian centavos/hour), you only pay the additional cents/hour for the hours the site runs in reserved mode. There is no upfront cost to enable this, and once you switch the site back to shared mode you go back to paying 1.3 US cents/hour (roughly 2.64 Brazilian centavos/hour). This makes the option extremely flexible and low cost.

Improved Custom Domain Support

Web sites running in "shared" or "reserved" mode support the ability to have custom host names associated with them (for example, www.mysitename.com). You can associate multiple custom domains with each Windows Azure Web Site. With today's release we are introducing support for A-Records (a feature users have asked for repeatedly). With A-Record support, you can now associate 'naked' domains with your Windows Azure Web Site; that is, instead of having to use www.mysitename.com you can simply use mysitename.com (without the www prefix). Because you can map multiple domains to a single site, you can optionally enable both domains (the www and the 'naked' version) for a site (and then use a URL rewrite/redirect rule to avoid SEO problems).

We have also improved the user interface for managing custom domains within the Windows Azure Portal as part of today's release. Clicking the "Manage Domains" button in the tray at the bottom of the portal now brings up a dedicated UI that makes it easy to manage/configure your domains. As part of this update we have also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains over to Windows Azure Web Sites without the web site going offline.

Continuous Deployment Support with Git and CodePlex or GitHub

One of the most popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems such as TFS and Git. This feature provides a very powerful way to manage application deployments using source control. It is really easy to enable this feature from a web site's dashboard page.

The TFS option we released earlier this summer offers a very rich continuous deployment solution that lets you automate builds and run unit tests every time you update your web site's repository, and then, if the tests pass, automatically publish/deploy the application to Windows Azure. With today's release, we are expanding our Git support to also enable continuous deployment scenarios by integrating it with projects hosted on CodePlex and GitHub. This support is enabled for all web sites (including those running in "free" mode).

Starting today, when you choose the "Set up Git publishing" link on a web site's dashboard page, you will see two additional options once Git-based publishing is enabled for the web site. You can click either the "Deploy from my CodePlex project" or the "Deploy from my GitHub project" link to follow a simple walkthrough that sets up a connection between your web site and a repository you host on CodePlex or GitHub. Once that connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a check-in occurs. Windows Azure will then download the code and build/deploy the new version of your application automatically. The following two videos show how easy it is to enable this workflow by deploying an initial app and then making a change to it:

Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)

This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today's release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

Support for Multiple Branches

Previously, we only supported deploying code that was located in the 'master' branch of the Git repository. Often, though, developers want to deploy from alternative branches (for example, a test branch or a branch with a future version of the application). This is now a supported scenario, both with local Git-based projects and with projects linked to CodePlex or GitHub. This enables a variety of useful scenarios. For example, you can now have two web sites, one for "production" and another for "testing", both linked to the same repository on CodePlex or GitHub. You can configure one of the web sites to always pull whatever is in the master branch, and the other to always pull whatever is in the test branch. This provides a very clean way to enable final testing of your site before it goes into production. This 1-minute video demonstrates how to configure which branch to use with a web site.

Summary

The features above are now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps for the cloud. We will have even more new features and enhancements coming in the next few weeks, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye on my blog for details as these new features become available.

Hope this helps,

- Scott

P.S. In addition to the blog, I am also using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu

(Text translated from the original post by Leniel Macaferi.)
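To make the naked-domain (A-record) setup described above concrete, the DNS entries involved typically look something like the sketch below. The domain name, the IP address (a documentation placeholder), and the exact record layout are illustrative assumptions only; the actual IP address to point the A-record at (and any verification records required) are shown in the portal's "Manage Domains" dialog.

; Illustrative DNS zone entries only (placeholder values, not real addresses)
mysitename.com.        IN  A      203.0.113.10                      ; naked domain via A-record
www.mysitename.com.    IN  CNAME  mysitename.azurewebsites.net.     ; www subdomain via CNAME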

    Read the article

  • eLearning event on HTML5 for Mobile with jQuery Mobile

    - by Wallym
I'll be doing an eLearning event on HTML5 for Mobile with jQuery Mobile. There will also be a few items sprinkled in on ASP.NET Razor. Mobile development is a hot item. Customers are buying iPhones, iPads, Android devices, and many other mobile computing devices at an ever increasing pace. Devices based on iOS and Android make up nearly 80 percent of the marketplace. RIM continues to be dominant in the business market across the world, and Nokia's Windows Phone presence is expected to grow on a worldwide basis. At the same time, web development is clearly a tremendous driver of applications, both on the public Internet and on private networks. How can developers target these various mobile platforms with web technologies? Developers can write web applications that take advantage of each mobile platform, but that is a lot of work. Into this space, the jQuery Mobile framework was developed. This eLearning series will provide an overview of mobile web development with jQuery Mobile, a detailed look at what the jQuery Mobile framework provides for us, how we can customize jQuery Mobile, and how we can use jQuery Mobile inside of ASP.NET. Link: http://elearning.left-brain.com/event/mobile-web-development

    Read the article

  • Project Euler 51: Ruby

    - by Ben Griswold
In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 51.  I know I started back up with Python this week, but I have three more Ruby solutions in the hopper and I wanted to share. For the record, Project Euler 51 was the second hardest Euler problem for me thus far. Yeah. As always, any feedback is welcome.

# Euler 51
# http://projecteuler.net/index.php?section=problems&id=51
#
# By replacing the 1st digit of *3, it turns out that six
# of the nine possible values: 13, 23, 43, 53, 73, and 83,
# are all prime.
#
# By replacing the 3rd and 4th digits of 56**3 with the
# same digit, this 5-digit number is the first example
# having seven primes among the ten generated numbers,
# yielding the family: 56003, 56113, 56333, 56443,
# 56663, 56773, and 56993. Consequently 56003, being the
# first member of this family, is the smallest prime with
# this property.
#
# Find the smallest prime which, by replacing part of the
# number (not necessarily adjacent digits) with the same
# digit, is part of an eight prime value family.

timer_start = Time.now

require 'mathn'

def eight_prime_family(prime)
  0.upto(9) do |repeating_number|
    # Assume mask of 3 or more repeating numbers
    if prime.count(repeating_number.to_s) >= 3
      ctr = 1
      (repeating_number + 1).upto(9) do |replacement_number|
        family_candidate = prime.gsub(repeating_number.to_s, replacement_number.to_s)
        ctr += 1 if (family_candidate.to_i).prime?
      end
      return true if ctr >= 8
    end
  end
  false
end

# Wanted to loop through primes using Prime.each
# but it took too long to get to the starting value.
n = 9999
while n += 2
  next if !n.prime?
  break if eight_prime_family(n.to_s)
end

puts n
puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • Vision, Integration, Ability—Oracle is once again positioned as an E-Commerce Leader

    - by Jeri Kelley
The new Gartner report is the fifth successive Magic Quadrant for E-Commerce to position Oracle as a leader. We’re proud of the result, but we’re not too surprised. Oracle Commerce’s functionality is uniquely aligned with a number of the major market trends Gartner describes in its report: from customers ‘expecting a seamless buying experience across all channels’, to organizations seeking to consolidate ‘B2B and B2C applications with a single underlying platform’.

What we think sets Oracle Commerce apart

Why are we a leader? We believe the key strengths of Oracle Commerce include:

Outstanding Scalability and Versatility: Oracle has a long and enviable track record of delivering B2B and B2C e-commerce solutions, and the Oracle Commerce solution supports a broad range of vertical industries, from retail to telecom and manufacturing to distribution. Additionally, Oracle Commerce is engineered to scale simply and quickly to meet the changing needs of the enterprise.

Oracle Integration: Our commitment to seamless solutions integration allows customers to get the most from our ever evolving range of e-commerce and CX products, and to deliver consistent, relevant, and personalized cross-channel buying experiences that drive customer satisfaction and boost revenue.

Experience and Vision: Oracle has a long and impressive history of delivering B2B and B2C e-commerce solutions to the world’s best brands. We’re constantly putting this experience to good use, and making our solutions even smarter. With powerful merchandising and business tools, and advanced promotions capabilities, Oracle Commerce is one of the most forward-thinking e-commerce solutions around.

Read the report: You can read Gartner’s full report here, or click here to find out more about our celebrated platform.

    Read the article

  • Great Blogs About Oracle Solaris 11

    - by Markus Weber
Now that Oracle Solaris 11 has been released, why not blog about blogs. There is, of course, a tremendous amount of resources and information available, but valuable insights directly from the people actually building the product are priceless. Here's a list of such great blogs. NOTE: If you think we missed some good ones, please let us know in the comments section!

Topic | Title | Author
Top 11 Things | My 11 favourite Solaris 11 features | Darren Moffat
Top 11 Things | These are 11 of my favorite things! | Mike Gerdts
Top 11 Things | 11 reasons to love Solaris 11 | Jim Laurent
SysAdmin Resources | Solaris 11 Resources for System Administrators | Rick Ramsey
Overview | Oracle Solaris 11: The First Cloud OS | Larry Wake
Overview | What's a "Cloud Operating System"? | Harry Foxwell
Overview | What's New in Oracle Solaris 11 | Jeff Victor
Try it! | Virtually the fastest way to try Solaris 11 (and Solaris 10 zones) | Dave Miner
Upgrade | Upgrading Solaris 11 Express b151a with support to Solaris 11 | Alan Hargreaves
IPS | The IPS System Repository | Tim Foster
IPS | Building a Solaris 11 repository without network connection | Jim Laurent
IPS | IPS Self-assembly - Part 1: overlays | Tim Foster
IPS | Self assembly - Part 2: multiple packages delivering configuration | Tim Foster
Security | Immutable Zones on Encrypted ZFS | Darren Moffat
Security | User home directory encryption with ZFS | Darren Moffat
Security | Password (PAM) caching for Solaris su - "a la sudo" | Darren Moffat
Security | Completely disabling root logins on Solaris 11 | Darren Moffat
Security | OpenSSL Version in Solaris | Darren Moffat
Security | Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 | Valerie Fenwick
Performance | Critical Threads Optimization | Rafael Vanoni
Performance | SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11 | BestPerf Blog
Performance | Recent Benchmarks Using Oracle Solaris 11 | BestPerf Blog
Predictive Self Healing | Introducing SMF Layers | Sean Wilcox
Predictive Self Healing | Oracle Solaris 11 - New Fault Management Features | Gavin Maltby
Desktop | What's new on the Solaris 11 Desktop? | Calum Benson
Desktop | S11 X11: ye olde window system in today's new operating system | Alan Coopersmith
Desktop | Accessible Oracle Solaris 11 - released! | Peter Korn

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
We are using a physical server and are in the process of performing an automated “ubuntu-12.04.1-server-amd64” OS installation on it. There are two HDDs for the OS installation, and they are in a RAID1 relationship. This setup has been done through the BIOS. The kickstart configuration file looks like this:

#Generated by Kickstart Configurator
#platform=AMD64 or Intel EM64T

#System language
lang en_US
#Language modules to install
langsupport en_US
#System keyboard
keyboard us
#System mouse
mouse
#System timezone
timezone Asia/Dili
#Root password
rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
#Initial user
user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
#Reboot after installation
reboot
#Use text mode install
text
#Install OS instead of upgrade
install
#Use Web installation
url --url my_repo_location
#System bootloader configuration
bootloader --location=mbr
#Clear the Master Boot Record
zerombr yes
#Partition clearing information
clearpart --all --initlabel
#Disk partitioning information
part /boot --fstype ext4 --size 100 --ondisk sda
part / --fstype ext4 --size 10000 --ondisk sda
part /var --fstype ext4 --size 10000 --ondisk sda
part swap --size 1024 --ondisk sdb
#System authorization information
auth --useshadow --enablemd5
#Network information
network --bootproto=dhcp --device=eth0
#Firewall configuration
firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
#X Window System configuration information
xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

But I am getting the following error: "No root file system is defined". Please advise: do we need to modify the kickstart configuration file? The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg (kickstart configuration file), but fails on the physical machine. Any help in this regard would be much appreciated. Thanks & Regards, Rajesh Prasad

    Read the article

  • ASP.NET Meta Keywords and Description

    - by Ben Griswold
Some of the ASP.NET 4 improvements around SEO are neat.  The ASP.NET 4 Page.MetaKeywords and Page.MetaDescription properties, for example, are a welcomed change.  There’s nothing earth-shattering going on here – you can now set these meta tags via your Master page’s code behind rather than relying on updates to your markup alone.  It isn’t difficult to manage meta keywords and descriptions without these ASP.NET 4 properties, but I still appreciate the attention SEO is getting.  It’s nice to get a gentle reminder via new coding features that some of the more subtle aspects of one’s application deserve thought and attention too.  For the record, this is how I currently manage my meta:

<meta name="keywords"
    content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Keywords"]) %>" />
<meta name="description"
    content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Description"]) %>" />

All Master pages assume the same keywords and description values as defined by the application settings.  Nothing fancy. Nothing dynamic. But it’s manageable.  It works, but I’m looking forward to the new way in ASP.NET 4.
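For comparison, here is a minimal sketch of the ASP.NET 4 approach mentioned above, setting the new properties from a Master Page's code-behind. The class name, the null checks, and the "Meta.Keywords"/"Meta.Description" appSettings keys simply mirror the markup sample and are assumptions, not anything the framework requires.

using System;
using System.Configuration;
using System.Web.UI;

public partial class SiteMaster : MasterPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Page is the content page hosting this Master Page.
        // Only apply the site-wide defaults if the page hasn't set its own values.
        if (string.IsNullOrEmpty(Page.MetaKeywords))
        {
            Page.MetaKeywords = ConfigurationManager.AppSettings["Meta.Keywords"];
        }

        if (string.IsNullOrEmpty(Page.MetaDescription))
        {
            Page.MetaDescription = ConfigurationManager.AppSettings["Meta.Description"];
        }
    }
}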

    Read the article

  • Commit in SQL

    - by PRajkumar
SQL Transaction Control Language (TCL) Commands: COMMIT

Commit Transaction

As SQL developers we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

Until you commit a transaction:
· You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
· You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

Note: Many people think that typing COMMIT writes the changes you have made to the data files, but this is wrong. When you type COMMIT you are saying that your work is complete, and the Oracle engine then verifies that the transaction achieved consistency; when everything checks out, it sends a commit acknowledgement to the user from the log buffer, not from the data buffer. After writing the data to the log buffer it instructs the data buffer to write the data to the data files. This is how it works.

Before a transaction that modifies data is committed, the following has occurred:
· Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
· Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
· The changes have been made to the database buffers of the SGA. These changes may go to disk before the transaction is committed.

Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits.

When a transaction is committed, the following occurs:
1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
2. The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
3. Oracle releases locks held on rows and tables.
4. Oracle marks the transaction complete.

Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.

The syntax of the COMMIT statement is:

COMMIT [WORK] [COMMENT 'your comment'];

· WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent.

Example: committing an insert

INSERT INTO table_name VALUES (val1, val2);
COMMIT WORK;

· COMMENT is also optional. This clause is supported for backward compatibility. Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction.

Example: The following statement commits the current transaction and associates a comment with it:

COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

· WRITE clause: Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order.

Example: To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement:

COMMIT WRITE BATCH;

Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

WAIT | NOWAIT: Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior.

IMMEDIATE | BATCH: Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior.

· FORCE clause: Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
· In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN.
· The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
· Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

Example: forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction (you must have DBA privileges to issue it):

COMMIT FORCE '22.57.53';

    Read the article

  • Iterative Conversion

    - by stuart ramage
Question Received: I am toying with the idea of migrating the current information first and the remainder of the history at a later date. I have heard that the conversion tool copes with this, but haven't found any information on how it does.

Answer: The Toolkit will support iterative conversions as long as the original master data key tables (the CK_* tables) are not cleared down from Staging (the already converted Transactional Data would need to be cleared down) and the Production instance being migrated into is actually Production (we have migrated into a pre-prod instance in the past and then unloaded this and loaded it into the real PROD instance, but this will not work for your situation; you need to be migrating directly into your intended environment). In this case the migration tool will still know all about the original keys and the generated keys for the primary objects (Account, SA, etc.), and as such it will be able to link the data converted as part of a second pass onto these entities.

It should be noted that this may result in the original opening balances potentially being displayed with an incorrect value (if we are talking about Financial Transactions), and also that care will have to be taken to ensure that all related objects are aligned (e.g. a Bill must have a set of bill segments, meter reads and financial transactions, and these entities cannot exist independently).

It should also be noted that subsequent runs of the conversion tool would need to be 'trimmed' to ensure that they only do work on the objects affected. You would not want to revalidate and migrate all Person, Account, SA, SA/SP, SP and Premise details since this information has already been processed, but you would definitely want to run the affected transactional record validation and keygen processes.

There is no real "hard-and-fast" rule around this processing since it is specific to each implementation's needs, but the majority of the effort required should be detailed in the Conversion Tool section of the online help (under Administration / The Conversion Tool). The major rule is to ensure that you only run the steps and validation/keygen steps that you need and do not do a complete rerun for your subsequent conversion.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more! Read More >
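As a rough sketch of the serialization calls described above: the control name, button handlers, and file path below are assumptions, and the article's demo persists the snapshot to a database record, for which you would save to a MemoryStream and store the resulting bytes instead of writing to disk.

using System;
using System.Web.UI.DataVisualization.Charting;

public partial class ChartBuilder : System.Web.UI.Page
{
    protected void SaveSnapshot_Click(object sender, EventArgs e)
    {
        // ChartControl is an existing Chart whose data and appearance
        // have already been configured on this page.
        ChartControl.Serializer.Content = SerializationContents.All;   // persist data + appearance
        ChartControl.Serializer.Save(Server.MapPath("~/App_Data/chart-snapshot.xml"));
    }

    protected void LoadSnapshot_Click(object sender, EventArgs e)
    {
        // Reconstitute the chart from the previously saved snapshot.
        ChartControl.Serializer.Load(Server.MapPath("~/App_Data/chart-snapshot.xml"));
    }
}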

    Read the article

< Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >