Search Results

Search found 35371 results on 1415 pages for 'oracle technology network'.


  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team is working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - allow end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - provide native "NoSQL" access to the storage layer without going first through SQL transformations and parsing

    Node.js is a complete web platform built around JavaScript, designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, it doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector differ from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and get back a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database.

    To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server sits between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability.

    The following sections take you through how to connect to MySQL, query the data, and get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");

        var dbProperties = {
          "implementation" : "ndb",
          "database" : "test"
        };

        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

        var annotations = new nosql.Annotations();

        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            this.salary *= (1 + percent); // multiply by 1.12 for a 12% raise
          };
        };

        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of extra variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session's remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

        var onSession = function(err, session) {
          var key = new Employee(999);
          session.remove(key, onDelete); // only the key field of the instance is used
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.
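
    The tutorial mentions a closure as the alternative to the connector's extra-parameter feature. As a minimal sketch of that alternative (plain JavaScript; the find/update callback signatures are assumed to match the excerpt above), defining the data callback inside onSession lets it capture the session variable directly:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            return; // ... error handling
          }
          // Defined inside onSession, onData closes over 'session',
          // so no extra callback parameter is needed.
          var onData = function(err, emp) {
            if (err) {
              console.log(err);
              return; // ... error handling
            }
            console.log('Found: ', JSON.stringify(emp));
            emp.giveRaise(0.12);
            session.update(emp, onUpdate); // 'session' captured by the closure
          };
          session.find(Employee, 0, onData);
        };

    Both approaches solve the same scoping problem; the connector's extra-parameter feature simply avoids nesting the callbacks.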

    Read the article

  • wrt54gl reboots; troubleshooting steps?

    - by Bill
    I am using about 10 wrt54gl's in a small school, with a combination of stock firmware and Tomato 1.25, slowly moving towards all Tomato. We have had these devices installed for several years without problems. Recently, more and more of the units have started to spontaneously reboot, usually during high-traffic times (but not always). For the most part, the rebooting is not critical for us, but the wrt54gl's temporarily revert to 192.168.1.1 on the LAN ethernet ports and conflict with a critical server that's already installed with that IP. (Yes - we plan to move the server off that address, but it is an involved process.) Both Tomato and the stock firmware (several versions, from recent to several years old) exhibit the same problem: random reboots, reverting to 192.168.1.1, and conflicting temporarily with our server until the firmware boot process finishes. Here are my questions:

    1. Any way to prevent the wrt54gl's from reverting to 192.168.1.1 during the boot process? I was thinking of doing a custom firmware mod, although I hate to go that direction.
    2. Any steps to take in troubleshooting the reboots? Only some of the wrt54gl's reboot, which is odd. Others stay online for weeks and months without issues.

    Thanks.

    Read the article

  • Database Web Service using Toplink DB Provider

    - by Vishal Jain
    With JDeveloper 11gR2 you can now create database-based web services using the JAX-WS Provider. The key differences between this and the already existing PL/SQL Web Services support are:

    - Based on the JAX-WS Provider
    - Supports SQL queries for creating web services
    - Supports table CRUD operations

    This is present as a new option in the New Gallery under 'Web Services'. When you invoke the New Gallery option, it presents you with three options to choose from. In this entry I will explain the options of creating a service based on SQL queries and on table CRUD operations.

    SQL Query based Service

    When you select this option, the 'Next' page asks you for the DB connection details. You can also choose whether you want SOAP 1.1 or 1.2 format. For this example, I will proceed with SOAP 1.1, the default option. On the next page, you can give the SQL query. The wizard supports bind variables, so you can parametrize your queries: give "?" for any input parameter you want to supply at runtime (for example, select * from employees where department_id = ?), and the "Bind Variables" button will be enabled. Here you can specify the name and type of each variable. Finish the wizard. Now you can test your service in the Analyzer, and see that the bind variable you specified appears as an input parameter in the Analyzer input form.

    CRUD Operations

    For this, at step 2 of the wizard, select the radio button "Generate Table CRUD Service Provider". At the next step, select the DB connection and the table for which you want to generate the default set of operations. Finish the wizard. Now run the service in the Analyzer for a quick check, and see that all the basic operations are exposed.

    Read the article

  • Gilda Garretón, a Java Developer and Parallelism Computing Researcher

    - by Yolande
    In a new interview titled “Gilda Garretón, a Java Developer and Parallelism Computing Researcher,” Garretón shares her first-hand experience developing with Java and Java 7 for very large-scale integration (VLSI) computer-aided design (CAD). Garretón gives an insightful overview of how Java is contributing to parallel programming and to Electric VLSI Design Systems, an open source VLSI CAD application used as a research platform for new CAD algorithms as well as in the research flow for hardware test chips. Garretón considers parallel programming hard and complex, yet important developments are taking place: "With the addition of the concurrent package in Java SE 6 and the Fork/Join feature in Java SE 7, developers have a chance to rely more on existing frameworks and dedicate more time to the essence of their parallel algorithms." Read the full article here.

    Read the article

  • Future Tech Duke

    - by Tori Wieldt
    Do you like the new Duke? Have you gotten the new Duke screensaver yet? Follow @java or Like I <3 Java on Facebook and get the latest 3D, animated "Future Tech Duke" screensaver. If you haven't already, register now to watch the global July 7 Java 7 community celebration and learn more about Java moving forward. We are looking for questions from the community to be asked during the panel Q & A. Enter your questions as a comment here, or tweet them with #java7. There's lots of great content being created for Java 7: technical articles, videos, updated web pages (can you say "layer cake?"), T-shirts, presentations, and there will be lots of Java 7 content in the new Java Magazine. See you at the Java 7 celebration event! Duke will be there!

    Read the article

  • Portal 11.1.1.4 Certified with E-Business Suite

    - by Steven Chan
    Oracle Portal 11g allows you to build, deploy, and manage enterprise portals running on Oracle WebLogic Server. Oracle Portal 11g includes integration with Oracle WebCenter Services 11g and BPEL, and support for the open portlet standards JSR 168, WSRP 2.0, and JSR 301.

    Portal 11g (11.1.1.4) is now certified with Oracle E-Business Suite. Portal 11.1.1.4 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.4.0, also known as FMW 11g Patchset 3. Certified E-Business Suite releases are:

    - EBS Release 11i 11.5.10.2 + ATG RUP 7 and higher
    - EBS Release 12.0.6 and higher
    - EBS Release 12.1.1 and higher

    If you're running a previous version of Portal, there are a number of certified and supported upgrade paths to Portal 11g (11.1.1.4):

    Read the article

  • TP-Link storage server accessing from Mac OS

    - by coure2011
    I have a storage/print server by TP-Link: http://www.tp-link.com/en/products/details/?categoryid=232&model=TL-PS310U. I connect USB storage to the print server, and on Windows I can access that USB device from my Windows computers. But how can I access the device from a Mac? I found an article explaining how to add a USB printer to a Mac using this device, but I'm not able to find how to access the storage device. Please help me out!

    Read the article

  • JavaServer Faces 2.0 for the Cloud

    - by Janice J. Heiss
    A new article now up on otn/java by Deepak Vohra titled “JSF 2.0 for the Cloud, Part One,” shows how JavaServer Faces 2.0 provides features ideally suited for the virtualized computing resources of the cloud. The article focuses on the @ManagedBean annotation, implicit navigation, and resource handling. Vohra illustrates how the container-based model found in Java EE 7, which allows portable applications to target single machines as well as large clusters, is well suited to the cloud architecture.

    From the article:

    “Cloud services might not have been a factor when JavaServer Faces 2.0 (JSF 2.0) was developed, but JSF 2.0 provides features ideally suited for the cloud, for example:

    • The path-based resource handling in JSF 2.0 makes handling virtualized resources much easier and provides scalability with composite components.
    • REST-style GET requests and bookmarkable URLs in JSF 2.0 support the cloud architecture. Representational State Transfer (REST) software architecture is based on transferring the representation of resources identified by URIs. A RESTful resource or service is made available as a URI path. Resources can be accessed in various formats, such as XML, HTML, plain text, PDF, JPEG, and JSON, among others. REST offers the advantages of being simple, lightweight, and fast.
    • Ajax support in JSF 2.0 is integrable with Software as a Service (SaaS) by providing interactive browser-based web applications.”

    In Part Two of the series, Vohra will examine features such as Ajax support, view parameters, preemptive navigation, event handling, and bookmarkable URLs. Have a look at the article here.

    Read the article

  • Mapping drives on MacOSX (Leopard/Snow Leopard) permanently inside LAN

    - by Shyam
    Hi, how do I "map" shared folders on a Mac permanently? By map, I do not mean 'connect', but permanently add them to the system so they still exist after a reboot. Since workstations tend to shut down, I also wonder about the symptoms and cures when that happens. In Linux, this can be done using the fstab file, but I noticed that volumes are mounted in a different structure than in Linux. I need this to back up some workstations, running a recursive job over a single directory that should contain the shared folders. I use Terminal to access the main system, so preferably the solution would work within a bash shell rather than a GUI. I can access all the folders in Finder. Thanks!

    Read the article

  • How to fix Jdeveloper 11.1.1.2 Hang

    - by nestor.reyes
    Is JDeveloper hanging on you when you use the XPath Expression Builder? Have a look at the release notes for 11.1.1.2; this will relieve a lot of frustration. http://download.oracle.com/docs/cd/E15523_01/doc.1111/e14770/bpel.htm#BABECHBF

    16.1.6 Oracle JDeveloper May Hang When Using the Expression Builder

    Using the Expression Builder to build XPath expressions may cause Oracle JDeveloper to hang. If that happens, perform the following steps:

    1. Kill the Oracle JDeveloper process.
    2. Restart Oracle JDeveloper.
    3. Select Tools > Preferences > SOA, and deselect the Validate Expression checkbox.

    After performing these steps, Oracle JDeveloper should no longer hang.

    Read the article

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open the file gets read/write, and the second person to open the file while the first person still has it open gets a read-only version. This is not true for graphics files, like Adobe Illustrator .AI files and Photoshop .PSD files. Anyone who goes to open these files has full read/write, even if someone else is already working on the file! This has led to numerous file corruption issues, as well as other lost work, since it always saves the last changes to the file. How do we get Windows to properly lock these files so that when someone is working on a file and someone else opens it, the second person gets read-only access? Many thanks, Bruce

    Read the article

  • MySQL Connect Conference: My Experience

    - by Hema Sridharan
    It was a great experience to attend the MySQL Connect conference for the first time ever. Personally I was very much enthralled to present "How to Make MySQL Backups," besides attending different sessions to absorb more knowledge about the technical prospects of MySQL. One of the agenda items in my presentation was the "MySQL Enterprise Backup" functionality and features. There were a total of 40 attendees in the session, who were very interested in the MySQL Enterprise Backup product and gave positive feedback as well as areas of improvement for our product. Some of our features brought a lot of excitement and smiles amongst our customers, including:

    1. Performance improvements in MEB 3.8.0
    2. The incremental base option from MEB 3.7.1, where there is no need to specify the directory name of the previous backup to fetch the lsn values; instead they can be fetched directly from the backup_history table using --incremental-base=history:last_backup
    3. The only-innodb-with-frm option introduced in MEB 3.7, a true online hot backup of InnoDB tables

    I also attended a session on a similar topic, "MEB Best Practices," conducted by Sanjay Manwani, where he double-clicked on all the features and best strategies of backup & restore. I also got the opportunity to attend other sessions, including:

    * Enabling the new generation of web and cloud services with MySQL 5.6 replication
    * Getting the most out of MySQL with MySQL Workbench
    * InnoDB compression for OLTP
    * Scaling for the Web and Cloud with MySQL replication

    Above all, I had some special moments at the conference, including meeting some executives / colleagues for the first time f2f. On the whole, the first MySQL Connect conference was a great success in terms of demonstrating the features of our products, getting direct feedback from customers, and team building. We also had some applauding yahoo moments when Tomas Ulin announced different releases including MySQL 5.6 RC, Connector/Python 1.0 and the ODBC 5.2 release, MySQL Cluster 7.3, additions to MySQL Enterprise Edition, etc.

    Read the article

  • Bridged network on OS X only gets UDP broadcast traffic

    - by a paid nerd
    I've created a bridged network on Mac OS X 10.8.5 using ifconfig and TUNTAP for OS X, to bridge my wireless connection, en0, with a virtual interface, tap0, which I can use for guest VMs:

        $ sudo sysctl -w net.inet.ip.forwarding=1
        $ sudo sysctl -w net.link.ether.inet.proxyall=1
        $ sudo sysctl -w net.inet.ip.fw.enable=1
        $ sudo ifconfig bridge0 create
        $ sudo ifconfig bridge0 addm en0 addm tap0
        $ sudo ifconfig bridge0 up
        $ ifconfig
        en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 28:cf:xx:xx:xx:xx
            inet6 xxxx::xxxx:xxxx:xxxx:xxxx%en0 prefixlen 64 scopeid 0x4
            inet 192.168.100.64 netmask 0xffffff00 broadcast 192.168.100.1
            media: autoselect
            status: active
        bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether ac:de:xx:xx:xx:xx
            Configuration:
                priority 0 hellotime 0 fwddelay 0 maxage 0
                ipfilter disabled flags 0x2
            member: en0 flags=3<LEARNING,DISCOVER>
                port 4 priority 0 path cost 0
            member: tap0 flags=3<LEARNING,DISCOVER>
                port 8 priority 0 path cost 0
        tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
            ether ca:3d:xx:xx:xx:xx
            open (pid 88244)

    However, if I tcpdump -i tap0, I only see broadcast traffic. Shouldn't I see a mirror of everything on en0? (192.168.100.33, the host doing the broadcasting, is another unrelated, noisy server on my LAN.) (I asked a similar question here and will probably close it.)

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher, then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices; split them by some criteria, say customer id; format them with a template; generate the output; and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like, and how they would like the document delivered. We (I) talk a good talk, but typically don't walk the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more, peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making, but it's been tough to find those minutes! Read on ...

    It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain customers' requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure:

        Name                           Type
        ------------------------------ --------
        CUSTOMER_ID                    NUMBER(6)
        TEMPLATE_TYPE                  VARCHAR2(20)
        TEMPLATE_NAME                  VARCHAR2(120)
        OUTPUT_FORMAT                  VARCHAR2(20)
        DELIVERY_CHANNEL               VARCHAR2(50)
        EMAIL                          VARCHAR2(255)
        FAX                            VARCHAR2(20)
        ATTACH                         VARCHAR2(20)
        FILE_LOC                       VARCHAR2(255)

    Simple enough, right? We just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However, the principle is exactly the same for each delivery type: each has a set of attributes that need to be provided, and you will need to handle that in your bursting query. On a side note, in EBS you use a bursting control file; you can apply the same principles that I'm laying out here, you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.

    Now that we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements:

        select distinct
          cp.customer_id "KEY",
          cp.template_name TEMPLATE,
          cp.template_type TEMPLATE_FORMAT,
          'en-US' LOCALE,
          cp.output_format OUTPUT_FORMAT,
          'false' SAVE_FORMAT,
          cp.delivery_channel DEL_CHANNEL,
          decode(cp.delivery_channel, 'FILE',  cp.file_loc,
                                      'EMAIL', cp.email,
                                      'FAX',   cp.fax) PARAMETER1,
          decode(cp.delivery_channel, 'FILE',  c.cust_last_name||'_orders.pdf',
                                      'EMAIL', '[email protected]',
                                      'FAX',   'faxserver.com') PARAMETER2,
          decode(cp.delivery_channel, 'FILE',  NULL,
                                      'EMAIL', '[email protected]',
                                      'FAX',   NULL) PARAMETER3,
          decode(cp.delivery_channel, 'FILE',  NULL,
                                      'EMAIL', 'Your current orders',
                                      'FAX',   NULL) PARAMETER4,
          decode(cp.delivery_channel, 'FILE',  NULL,
                                      'EMAIL', 'Please find attached a copy of your current orders with BI Publisher, Inc',
                                      'FAX',   NULL) PARAMETER5,
          decode(cp.delivery_channel, 'FILE',  NULL,
                                      'EMAIL', 'false',
                                      'FAX',   NULL) PARAMETER6,
          decode(cp.delivery_channel, 'FILE',  NULL,
                                      'EMAIL', '[email protected]',
                                      'FAX',   NULL) PARAMETER7
        from cust_prefs cp, customers c, orders_view ov
        where cp.customer_id = c.customer_id
          and cp.customer_id = ov.customer_id
        order by cp.customer_id

    Pretty straightforward; just test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use. Now your users can run the burst, and documents will be formatted, generated and delivered based on the customer prefs.

    If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table. Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!

    Read the article

  • Practical Approaches to increasing Virtualization Density-Part 1

    - by Girish Venkat
    Happy New Year, everyone! Let me kick the year off by talking about virtualization density.

    What is it? The number of virtual servers that a physical server can support, and its increase from the prior physical infrastructure as a percentage.

    Why is it important? Because density should be indicative of how well the server is being consumed.

    So what is wrong? Virtualization density fails to convey the "real usage" of a server. Most hypervisor-based O/S virtualization evangelists take pride in the fact that they are now running a virtual server farm of X machines compared to a physical server farm of Y (with Y less than X, obviously). The real question is: has your utilization of the server really increased or not? In an internal study conducted by one of the top financial institutions, the utilization of servers only went up by 15 percentage points, from 30% to 45%. So just by increasing virtualization density, one will not achieve the goal of better consuming the servers in the server farm. I will write about the possible approaches to increasing virtualization density in the next entry.

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental blog series. Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success. Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle. Today, I will discuss project planning and how to effectively plan your iterations.

    Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones. In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan. In a project setting, the Implementation Plan is a high-level plan that focuses on milestones, objectives, and the number of iterations. It is the plan that is typically developed at the start of an engagement, identifying the project phases and milestones. When the iterative and incremental principle is applied, the Implementation Plan also identifies the number of iterations planned for each phase. The Implementation Plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration's start, during Iteration Planning. An individual iteration plan is created for each project iteration.

    For my fitness regime, I also created an "Implementation Plan" for my weekly exercise. My high-level plan includes exercising 6 days a week and, since I cross-train, trying not to repeat the same exercise two days in a row. Because running on the hills outside is the most difficult and, consequently, the most effective exercise, my implementation plan includes running outside at least 2 times a week. Regardless of the exercise selected, I always apply a series of 6-minute interval sets. I never plan what I will do each day in advance, because there are too many changing factors that need to be considered before that level of detail is determined. If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level. It is unrealistic to plan each day of the week without considering the unique circumstances at that time. For example, what is the weather? Are there conflicting schedule commitments? Are there injuries that need to be considered? Likewise, in a project setting, it is best to plan the iteration details prior to its start.

    Join me for tomorrow's blog, where I will discuss when and how to plan the details of your iterations.

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions' database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they've had as a result of those changes, and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data

    FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs

    "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering.

    Performance Schema

    As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 Performance Schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads

    MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup

    Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide Complete Solution

    "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More>>

    - FairWarning MySQL Case Study
    - Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    - Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and / or slides + Q&A transcript)
    - MyISAM to InnoDB – Why and How on-demand webinar
    - Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
    The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just several weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday, 25th October, at 0900 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available to view on-demand at your convenience. In the webinar, we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including:

    - Foreign Key Constraints: Yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here.
    - Node.js NoSQL API: Allowing web, mobile and cloud services to query and receive result sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here.
    - Auto-Installer: This new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here.

    So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open.

    Read the article

  • Linux does not communicate to Windows subinterface

    - by artaxerxe
    I set my NIC on Windows so that I have two interfaces: the first one has IP 192.168.0.5, the other one has IP 10.10.10.1. On one Linux machine I set an interface to 10.10.10.2; on another Linux machine I set the interface to 10.10.10.3. Then I tried to ping those machines. Here is the result: Linux to Linux is OK. Windows to Linux is also OK. But Linux to Windows does not work. Can you help me get communication between Linux and Windows working? What should I do? I should mention that these machines are connected through a switch. If you need any details, please ask me! Thanks in advance!

    Read the article

  • Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture

    - by Bob Rhubart
    The latest ArchBeat podcast program features a four-part conversation with William Ulrich and Neal McWhorter, the authors of Business Architecture: The Art and Practice of Business Transformation, available from Meghan-Kiffer Press.

    Listen to Part 1: Bill and Neal cover the basics and discuss the effects of the lack of business architecture on organizations.
    Listen to Part 2 (Jan 19): What really happens to the billions of dollars annually invested in IT.
    Listen to Part 3 (Jan 26): Why the IT and business sides of many organizations can't play nice.
    Listen to Part 4 (Feb 2): How IT architects and business architects can work together to get the ship back on course and keep it there.

    Connect: William Ulrich - Website | LinkedIn | Business Architecture Guild. Neal McWhorter - Website | LinkedIn | Business Architecture Group on OMG.

    Coming Soon: Bob Hensle, Director, Oracle Enterprise Architecture Group, discusses the recently launched IT Solutions from Oracle (ITSO) library of documents. Excerpts from a recent OTN Architect Community Virtual Meet-up.

    Read the article

  • How can I setup apache+mod_proxy so when I connect to mod_proxy on interface X, it sends the traffic

    - by aspitzer
    We use a service that allots us X number of requests per IP and allows us to set up 5 IPs with such a limit (I know... it seems stupid they could not just up the limit 5x on one IP). Pretend I have a Linux box with the following address on the internet: 66.249.90.104 - that is a Google IP and not mine... so feel free to try to hack into it :) I set up apache+mod_proxy as a forwarding proxy (ProxyRequests On), i.e. you can set Firefox to use 66.249.90.104:8080 as a proxy, and all Firefox traffic comes out as 66.249.90.104. So far so good. Problem: now I add more alias interfaces so the total looks like this:

        eth0:   66.249.90.104
        eth0:1  66.249.90.105
        eth0:2  66.249.90.106
        eth0:3  66.249.90.107
        eth0:4  66.249.90.108

    I run apache+mod_proxy (a single Apache instance) which binds to all interfaces, but no matter which address I connect to for the forwarding proxy, all traffic goes out to the internet as 66.249.90.104. I have also tried running 5 different Apaches, each binding to its own interface only, but that still sends the outbound request through 66.249.90.104. I was hoping to get it to work as follows: I connect to 66.249.90.108 and make a proxy request, and it goes out as 66.249.90.108; I connect to 66.249.90.107 and make a proxy request, and it goes out as 66.249.90.107; etc. Has anyone else had to deal with this issue? The fallback solution would be to just run Apache on 5 separate boxes, but I would prefer it all work on one box. Thanks!
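
    One possible direction, sketched rather than tested: the 2.2-era mod_proxy offers no per-listener control over the outbound source address (if memory serves, later Apache releases added a ProxySourceAddress directive worth checking). If a different tool is an option, a small forward proxy in Node.js can bind each outbound connection to the address its listener uses, via the localAddress option of http.request. The IPs below are the ones from the question; everything else is a hypothetical sketch, not a drop-in replacement for the Apache setup:

        // Hypothetical sketch, not from the original post: one tiny forward
        // proxy per alias IP, with outbound connections bound to the same
        // address the listener uses ('localAddress' in http.request).
        // Plain HTTP only; CONNECT (HTTPS) tunneling is not handled.
        var http = require('http');
        var url = require('url');

        var addresses = ['66.249.90.104', '66.249.90.105', '66.249.90.106',
                         '66.249.90.107', '66.249.90.108'];

        addresses.forEach(function(addr) {
          http.createServer(function(clientReq, clientRes) {
            // In a forward proxy, the request line carries an absolute URI.
            var target = url.parse(clientReq.url);
            var upstream = http.request({
              hostname: target.hostname,
              port: target.port || 80,
              path: target.path,
              method: clientReq.method,
              headers: clientReq.headers,
              localAddress: addr   // outbound traffic leaves from this IP
            }, function(upstreamRes) {
              clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
              upstreamRes.pipe(clientRes);
            });
            clientReq.pipe(upstream);
          }).listen(8080, addr);   // one listener per alias interface
        });

    Pointing Firefox at 66.249.90.107:8080 should then produce outbound requests sourced from 66.249.90.107, and so on for each alias.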

    Read the article

  • Webcast Replay Available: E-Business Suite Release 12.1 Upgrade Best Practices - Technical Insight

    - by BillSawyer
    I am pleased to release the replay and presentation for the latest ATG Live Webcast: E-Business Suite Release 12.1 Upgrade Best Practices - Technical Insight (Presentation). Udayan Parvate, Director, E-Business Suite Release Engineering, and Uday Moogala, Senior Principal Engineer, Applications Performance, discussed the best practices that you can apply when upgrading your E-Business Suite instance to Release 12.1 and beyond. They discussed upgrade paths, resources, and practices to minimize downtime during the upgrade. (April 2012)

    Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • SAS Expanders vs Direct Attached (SAS)?

    - by jemmille
    I have a storage unit with 2 backplanes. One backplane holds 24 disks, one backplane holds 12 disks. Each backplane is independently connected via an SFF-8087 port (4 channels / 12 Gbit) to the RAID card. Here is where my question really comes in: can a backplane be overloaded, and how easily? All the disks in the machine are WD RE4 WD1003FBYX (black) drives that average 115 MB/sec writes and 125 MB/sec reads.

    I know things would vary based on the RAID level or filesystem we put on top of that, but it seems that a 24-disk backplane with only one SFF-8087 connector should be able to overload the bus to a point that might actually slow it down. Based on my math, if I had a RAID-0 across all 24 disks and asked for a large file, I should in theory get 24 * 115 MB/sec, which translates to 22.08 Gbit/sec of total throughput, well beyond the 12 Gbit/sec (roughly 1.5 GB/sec) that a single four-lane SFF-8087 link can carry. Either I'm confused or this backplane is horribly designed, at least for a performance environment. I'm looking at switching to a model where each drive has its own channel from the backplane (and new HBAs or RAID card).

    EDIT: more details

    We have used pure Linux (CentOS), OpenSolaris, software RAID, hardware RAID, EXT3/4, and ZFS. Here are some examples using bonnie++:

        4 Disk RAID-0, ZFS
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        194MB/s   19%  92MB/s    11%  200MB/s    8%  310/sec
        194MB/s   19%  93MB/s    11%  201MB/s    8%  312/sec
        --------- ---- --------- ---- --------- ---- ---------
        389MB/s   19%  186MB/s   11%  402MB/s    8%  311/sec

        8 Disk RAID-0, ZFS
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        324MB/s   32%  164MB/s   19%  346MB/s   13%  466/sec
        324MB/s   32%  164MB/s   19%  348MB/s   14%  465/sec
        --------- ---- --------- ---- --------- ---- ---------
        648MB/s   32%  328MB/s   19%  694MB/s   13%  465/sec

        12 Disk RAID-0, ZFS
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        377MB/s   38%  191MB/s   22%  429MB/s   17%  537/sec
        376MB/s   38%  191MB/s   22%  427MB/s   17%  546/sec
        --------- ---- --------- ---- --------- ---- ---------
        753MB/s   38%  382MB/s   22%  857MB/s   17%  541/sec

        Now at 16 Disk RAID-0, it gets interesting
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        359MB/s   34%  186MB/s   22%  407MB/s   18%  1397/sec
        358MB/s   33%  186MB/s   22%  407MB/s   18%  1340/sec
        --------- ---- --------- ---- --------- ---- ---------
        717MB/s   33%  373MB/s   22%  814MB/s   18%  1368/sec

        20 Disk RAID-0, ZFS
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        371MB/s   37%  188MB/s   22%  450MB/s   19%  775/sec
        370MB/s   37%  188MB/s   22%  447MB/s   19%  797/sec
        --------- ---- --------- ---- --------- ---- ---------
        741MB/s   37%  376MB/s   22%  898MB/s   19%  786/sec

        24 Disk RAID-1, ZFS
        WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
        347MB/s   34%  193MB/s   22%  447MB/s   19%  907/sec
        347MB/s   34%  192MB/s   23%  446MB/s   19%  933/sec
        --------- ---- --------- ---- --------- ---- ---------
        694MB/s   34%  386MB/s   22%  894MB/s   19%  920/sec

        28 Disk RAID-0, ZFS
        32 Disk RAID-0, ZFS
        36 Disk RAID-0, ZFS

    More details: here is the exact unit: http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400U.cfm

    Read the article

  • Identical traffic

    - by Walter White
    Hi all, I am running an application server and logging all requests for later analysis. One interesting trend I noticed last night: a visitor from Texas on FIOS shared identical traffic with a Bluecoat proxy in California. What would cause the traffic to be identical? For every request the visitor made, Bluecoat made one subsequently, within milliseconds of his request. If it is caching, why would there be identical requests? Wouldn't it go through the cache / proxy on their end, so that I would only see the proxied request? I'm just curious; this is an interesting pattern that shows similarities to a DDoS attack, but with far fewer resources. Is it possible that the visitor had malware on their computer? Any other ideas? Walter

    Read the article
