Search Results

Search found 35943 results on 1438 pages for 'database systems'.


  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we issued a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company that needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integrating Ameristar’s promotional and gaming data from 14 data sources across its 7 casino hotel properties, in real time, into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may notice, there was no Oracle Database involved in this project, but Ameristar’s IT leadership knew that GoldenGate’s strong heterogeneous and real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse and make this real-time customer information available to marketing teams to improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is lower operational cost. The previous data capture solution Ameristar used was trigger based and required a lot of effort to manage; it needed dedicated IT staff to maintain. With GoldenGate, the solution runs seamlessly without a fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • San Joaquin County, California Wins AIIM 2012 Carl E. Nelson Best Practice Award

    - by Peggy Chen
    Last month, AIIM, the global community of information professionals, announced the winners of the 2012 Carl E. Nelson Best Practices Awards, and San Joaquin County, California, won in the small company category (1-100 employees). The Carl E. Nelson Best Practices Award was established to recognize excellence in the area of information management. "Best practice" denotes a standard of excellence that has been achieved within an organization and refers to a process that can be quantified, adapted and repeated. Like many counties, San Joaquin County, California, was faced with huge challenges due to decreasing funds and staff, including a reduced capacity for building new capabilities. It needed to streamline processes, cut costs per activity, modernize and strengthen the infrastructure, and adopt new technology and standards such as the National Information Exchange Model (NIEM). The Integrated Justice Information System (IJIS) provides a Web-based system to link more than 650,000 residents, 18 agencies countywide and other law enforcement systems nationwide. The county’s modernization initiative focused on replacing its outdated warrant system, implementing service-oriented architecture (SOA) to simplify integration between county law and justice systems, deploying Business Process Management (BPM), Case Management with content management, and Web technologies from Oracle. A critical part of the county’s success has been the proper alignment of its strategic vision with the way the organization was enabled to plan and execute (and continues to execute) its modernization project. Congratulations to San Joaquin County!

    Read the article

  • How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?

    - by Kaveh Shahbazian
    I am new to nopCommerce and ecommerce in general, but I am involved in an ecommerce project. Based on my past experiences with RavenDB (which were mostly absolutely pleasant) and the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle everything related to the database. I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; for example, it brings the EF working model into other layers. I have found a nopCommerce fork that uses NuDB as its database, but it did not help because NuDB still has the feel of an RDBMS and is not as different as RavenDB is. Now, first: how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second: has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a one (~two) month time frame? Thanks in advance.

    Read the article

  • SQL SERVER – Download Microsoft SQL Server Compact 4.0 SP1

    - by pinaldave
    Microsoft SQL Server Compact 4.0 is a free, embedded database that software developers can use for building ASP.NET websites and Windows desktop applications. SQL Server Compact 4.0 is the default database for Microsoft WebMatrix. For enhanced development and debugging capabilities, including designer support, Visual Studio can be used to develop ASP.NET web applications and websites using SQL Server Compact 4.0. It is enabled to work in medium or partial trust environments on web servers, and it can be easily deployed along with a website to third-party website hosting service providers. SQL Server CE 4.0 also provides stronger data security through the use of SHA-2 algorithms for encrypting the databases. The latest version also adds T-SQL support for OFFSET and FETCH, which can be used to write paging queries. When used with the ADO.NET Entity Framework, SQL Server Compact now supports columns that have server-generated keys (such as identity and rowguid) as well as the code-first programming model. SQL Server Compact 4.0 is freely redistributable under a redistribution license agreement. SQL Server Compact 3.5 and SQL Server Compact 4.0 can be installed and work side by side on a desktop. Download Microsoft SQL Server Compact 4.0 SP1 Here are my earlier articles on SQL Server CE: Difference Between SQL Server Compact Edition (CE) and SQL Server Express Edition SQL SERVER – CE – 3 Links to Performance Tuning Compact Edition SQL SERVER – CE – List of Information_Schema System Tables SQL SERVER – Server Side Paging in SQL Server CE (Compact Edition) SQL SERVER – CE – Samples Database for SQL CE 4.0 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • HTG Explains: Do Non-Windows Platforms Like Mac, Android, iOS, and Linux Get Viruses?

    - by Chris Hoffman
    Viruses and other types of malware seem largely confined to Windows in the real world. Even on a Windows 8 PC, you can still get infected with malware. But how vulnerable are other operating systems to malware? When we say “viruses,” we’re actually talking about malware in general. There’s more to malware than just viruses, although the word virus is often used to talk about malware in general. Why Are All the Viruses For Windows? Not all of the malware out there is for Windows, but most of it is. We’ve tried to cover why Windows has the most viruses in the past. Windows’ popularity is definitely a big factor, but there are other reasons, too. Historically, Windows was never designed for security in the way that UNIX-like platforms were — and every popular operating system that’s not Windows is based on UNIX. Windows also has a culture of installing software by searching the web and downloading it from websites, whereas other platforms have app stores and Linux has centralized software installation from a secure source in the form of its package managers. Do Macs Get Viruses? The vast majority of malware is designed for Windows systems and Macs don’t get Windows malware. While Mac malware is much more rare, Macs are definitely not immune to malware. They can be infected by malware written specifically for Macs, and such malware does exist. At one point, over 650,000 Macs were infected with the Flashback Trojan. [Source] It infected Macs through the Java browser plugin, which is a security nightmare on every platform. Macs no longer include Java by default. Apple also has locked down Macs in other ways. Three things in particular help: Mac App Store: Rather than getting desktop programs from the web and possibly downloading malware, as inexperienced users might on Windows, they can get their applications from a secure place. It’s similar to a smartphone app store or even a Linux package manager. Gatekeeper: Current releases of Mac OS X use Gatekeeper, which only allows programs to run if they’re signed by an approved developer or if they’re from the Mac App Store. This can be disabled by geeks who need to run unsigned software, but it acts as additional protection for typical users. XProtect: Macs also have a built-in technology known as XProtect, or File Quarantine. This feature acts as a blacklist, preventing known-malicious programs from running. It functions similarly to Windows antivirus programs, but works in the background and checks applications you download. Mac malware isn’t coming out nearly as quick as Windows malware, so it’s easier for Apple to keep up. Macs are certainly not immune to all malware, and someone going out of their way to download pirated applications and disable security features may find themselves infected. But Macs are much less at risk of malware in the real world. Android is Vulnerable to Malware, Right? Android malware does exist and companies that produce Android security software would love to sell you their Android antivirus apps. But that isn’t the full picture. By default, Android devices are configured to only install apps from Google Play. They also benefit from antimalware scanning — Google Play itself scans apps for malware. You could disable this protection and go outside Google Play, getting apps from elsewhere (“sideloading”). Google will still help you if you do this, asking if you want to scan your sideloaded apps for malware when you try to install them. 
In China, where many, many Android devices are in use, there is no Google Play Store. Chinese Android users don’t benefit from Google’s antimalware scanning and have to get their apps from third-party app stores, which may contain infected copies of apps. The majority of Android malware comes from outside Google Play. The scary malware statistics you see primarily include users who get apps from outside Google Play, whether it’s pirating infected apps or acquiring them from untrustworthy app stores. As long as you get your apps from Google Play — or even another secure source, like the Amazon App Store — your Android phone or tablet should be secure. What About iPads and iPhones? Apple’s iOS operating system, used on its iPads, iPhones, and iPod Touches, is more locked down than even Macs and Android devices. iPad and iPhone users are forced to get their apps from Apple’s App Store. Apple is more demanding of developers than Google is — while anyone can upload an app to Google Play and have it available instantly while Google does some automated scanning, getting an app onto Apple’s App Store involves a manual review of that app by an Apple employee. The locked-down environment makes it much more difficult for malware to exist. Even if a malicious application could be installed, it wouldn’t be able to monitor what you typed into your browser and capture your online-banking information without exploiting a deeper system vulnerability. Of course, iOS devices aren’t perfect either. Researchers have proven it’s possible to create malicious apps and sneak them past the app store review process. [Source] However, if a malicious app was discovered, Apple could pull it from the store and immediately uninstall it from all devices. Google and Microsoft have this same ability with Android’s Google Play and Windows Store for new Windows 8-style apps. Does Linux Get Viruses? Malware authors don’t tend to target Linux desktops, as so few average users use them. Linux desktop users are more likely to be geeks that won’t fall for obvious tricks. As with Macs, Linux users get most of their programs from a single place — the package manager — rather than downloading them from websites. Linux also can’t run Windows software natively, so Windows viruses just can’t run. Linux desktop malware is extremely rare, but it does exist. The recent “Hand of Thief” Trojan supports a variety of Linux distributions and desktop environments, running in the background and stealing online banking information. It doesn’t have a good way if infecting Linux systems, though — you’d have to download it from a website or receive it as an email attachment and run the Trojan. [Source] This just confirms how important it is to only run trusted software on any platform, even supposedly secure ones. What About Chromebooks? Chromebooks are locked down laptops that only run the Chrome web browser and some bits around it. We’re not really aware of any form of Chrome OS malware. A Chromebook’s sandbox helps protect it against malware, but it also helps that Chromebooks aren’t very common yet. It would still be possible to infect a Chromebook, if only by tricking a user into installing a malicious browser extension from outside the Chrome web store. The malicious browser extension could run in the background, steal your passwords and online banking credentials, and send it over the web. 
Such malware could even run on Windows, Mac, and Linux versions of Chrome, but it would appear in the Extensions list, would require the appropriate permissions, and you’d have to agree to install it manually. And Windows RT? Microsoft’s Windows RT only runs desktop programs written by Microsoft. Users can only install “Windows 8-style apps” from the Windows Store. This means that Windows RT devices are as locked down as an iPad — an attacker would have to get a malicious app into the store and trick users into installing it or possibly find a security vulnerability that allowed them to bypass the protection. Malware is definitely at its worst on Windows. This would probably be true even if Windows had a shining security record and a history of being as secure as other operating systems, but you can definitely avoid a lot of malware just by not using Windows. Of course, no platform is a perfect malware-free environment. You should exercise some basic precautions everywhere. Even if malware was eliminated, we’d have to deal with social-engineering attacks like phishing emails asking for credit card numbers. Image Credit: stuartpilbrow on Flickr, Kansir on Flickr     

    Read the article

  • Into Orbit (OBIEE 11g Launch)

    - by Darryn.Hinett
    After much anticipation, it appears that OBIEE 11g is about to hit the streets. Join Charles Phillips, President, and Thomas Kurian, Executive Vice President, Product Development, for the launch of the latest release of Oracle's business intelligence software. Be the first to hear about Oracle Business Intelligence Enterprise Edition 11g, the new, industry-leading technology platform for business intelligence, which offers: A powerful end-user experience with rich visualisation, search, and actionable collaboration Advancements in analytics, OLAP, and enterprise reporting, with unmatched performance and scalability Simplified system configuration, life-cycle management, and performance optimisation As well as the keynote and technical general session, break out sessions will cover the following topics: Business Intelligence: From Insight to Action In this session, you will learn about an exciting, industry-first innovation that connects business intelligence directly to your business processes. You can spot an opportunity or issue, and immediately initiate appropriate action directly from your dashboard. Oracle Business Intelligence Enterprise Edition 11g Systems Management and Deployment Learn how you can streamline the process of configuring your system, provisioning users, and monitoring and optimising query performance. Attend this session to hear how new integration with Oracle Enterprise Manager provides unique systems management, superior scalability, and high availability and security benefits, while making upgrades effortless. Extending Business Intelligence Analytics with Online Analytical Processing (OLAP) Learn how you can enhance the analytical power and business value of your BI solution with a unified environment for navigating and querying both OLAP and relational data sources. This session will focus on how Oracle Business Intelligence Enterprise Edition 11g, used with Oracle Essbase, can deliver insight at the speed of thought. Integrated Performance Management If your organisation is using or considering performance management applications such as Oracle's Hyperion Planning and Hyperion Financial Management, you will not want to miss this session. See how you can leverage Oracle's BI solution for accessing performance management applications and performing extended financial reporting and analysis. Visualisation and End-user Experience The latest release of Oracle Business Intelligence provides an unrivaled end user experience, including rich interactive dashboards, a vast range of animated charting options, integrated search, and more. This session will also include a close look at how you can leverage location data to visualise geo-spatial information.

    Read the article

  • OWB 11gR2 - Windows and Linux 64-bit clients available

    - by David Allan
    In addition to the integrated release of OWB in the 11.2.0.3 Oracle database distribution, the following 64-bit standalone clients are now available for download from Oracle Support. OWB 11.2.0.3 Standalone client for Windows 64-bit - 13365470 OWB 11.2.0.3 Standalone client for Linux X86 64-bit - 13366327 This is in addition to the previously released 32-bit client on Windows. OWB 11.2.0.3 Standalone client for Windows 32-bit - 13365457 The support document Major OWB 11.2.0.3 New Features Summary has details for OWB 11.2.0.3, which include the following. Exadata v2 and Oracle Database 11gR2 support capabilities; Support for Oracle Database 11gR2 and Exadata compression types Even more partitioning: Range-Range, Composite Hash/List, System, Reference Transparent Data Encryption support Data Guard support/certification Compiled PL/SQL code generation Capabilities to support data warehouse ETL best practices; Read and write Oracle Data Pump files with external tables External table preprocessor Partition specific DML Bulk data movement code templates: Oracle, IBM DB2, Microsoft SQL Server to Oracle Integration with Fusion Middleware capabilities; Support for OWB's Control Center Agent on WLS Lots of interesting capabilities in 11.2.0.3, and the availability of the 64-bit client I'm sure is welcome news for many!

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB) library, the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here is how each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at a time. These 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. After the final round, W[79], the five 32-bit state words that the rounds update are combined into the 160-bit SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize only 3 rounds at a time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! (A short numerical check of this identity appears at the end of this post.) This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it), and it exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide.
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions: movdqa W_minus_04, W_TMP pxor W_minus_28, W // W equals W[i-32:i-29] before XOR // W = W[i-32:i-29] ^ W[i-28:i-25] palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from // W[i-4:i-1] and W[i-8:i-5] vectors pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) movdqa W, W_TMP // 4 dwords in W are rotated left by 2 psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2) pslld $2, W_TMP por W, W_TMP movdqa W_TMP, W // four new W values W[i:i+3] are now calculated paddd (K_XMM), W_TMP // adding 4 current round's values of K movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read A window of the 32 previous results, W[i-1] to W[i-32] is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation): RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR With vectorization, 4 rounds can be computed in parallel: RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, a input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above: pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) With AVX they can be combined in one instruction: vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) command's buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte-at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits) Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. 
Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second. Availability This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example: nehalem $ isainfo -v 64-bit amd64 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above: sandybridge $ isainfo -v 64-bit amd64 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine if they are running on SSSE3 or AVX-capable machines and execute the correctly-tuned code for that microprocessor. Summary The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release. References "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010). The source for these SHA-1 optimizations used in Solaris "SHA-1", Wikipedia Good overview of SHA-1 FIPS 180-1 SHA-1 standard (FIPS, 1995) NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
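As promised in the vectorization section above, the equivalence of the two W[i] recurrences is easy to check numerically. The following is a minimal Python sketch added here for illustration (it is not part of the Intel or Solaris code): it expands a random 16-word block with the standard SHA-1 recurrence and then verifies that the refactored i-6/i-16/i-28/i-32 form reproduces the same values.

    import random

    def rotl32(x, n):
        # Rotate a 32-bit word left by n bits.
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    # Start from a random 512-bit block, i.e. sixteen 32-bit words.
    W = [random.getrandbits(32) for _ in range(16)]

    # Standard SHA-1 message expansion: W[i] depends on W[i-3].
    for i in range(16, 80):
        W.append(rotl32(W[i - 3] ^ W[i - 8] ^ W[i - 14] ^ W[i - 16], 1))

    # Refactored recurrence: W[i] depends on nothing newer than W[i-6],
    # which is what allows more rounds to be computed in parallel.
    for i in range(32, 80):
        alt = rotl32(W[i - 6] ^ W[i - 16] ^ W[i - 28] ^ W[i - 32], 2)
        assert alt == W[i], "mismatch at round %d" % i

    print("Both recurrences agree for rounds 32..79")

Note that the refactored form only applies from round 32 onward, since it needs W[i-32]; the earlier rounds still have to be produced with the original definition.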

    Read the article

  • links for 2010-03-11

    - by Bob Rhubart
    Andy Mulholland: (Information Technology) + (Business Technology) ÷ Clouds = Infostructure "Internal information technology with its dedicated users, applications, licenses, client-server, data-centric and close coupled integration architecture cannot support externally oriented business technology where almost every condition is different. Internet connectivity and the emergence of people centric services in the web 2.0 world has led business and user expectations to shift dramatically and give rise to the expectation of a new and completely different working environment, based in the cloud, or more correctly, clouds." -- Andy Mulholland, CTO Blog, Capgemini (tags: enterprisearchitecture cloud web2.0 entarch) @myfear: Getting started with (GSW #2): GlassFish v3 "If the application server/container of your choice is a Java EE compliant one, you are on the right track. This list is not too long these days, if you look for Java EE 6 compliant servers. The most prominent and well-known is also the Java EE 6 reference implementation (RI): The Oracle GlassFish v3." -- Oracle ACE Markus "@myfear" Eisele (tags: oracle otn oracleace glassfish java) @oraclenerd: The"Database is a Bucket" Mentality "Could it be that everyone out there believes that the sole purpose of a database is to store data? That it can't do anything else?" -- Chet "@oraclenerd" Justice (tags: otn oracle database dba) The Encyclopedia of SOA "SOA is an anagram for OSA, which means female bear in spanish. It is a well-known fact in the spanish-speaking world that female bears are able to model business processes and optimize reusable IT assets better than any other hibernating animal." -- One of the surprisingly funny nuggets of wisdom available in the Encyclopedia of SOA. (tags: architecture chucknorris humor soa software technology webservices) Marina Fisher: Book Review - Web 2.0 Fundamentals Marina Fisher reviews WEB 2.0 FUNDAMENTALS by Oswald Campesato and Kevin Nilson. (tags: sun web2.0 bookreview socialnetworking)

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing to show you before we start fixing things and seeing how that affects the data collected: historical moments in time. For example, back in Post #3 I was looking at spikes in some of the monitored resources that took place a couple of weeks back in time. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, the Global Overview, and click on the icon. From there you can select the date and time you're interested in. For example, I saw some serious CPU queues last week. This rolls back the time for all the information that's available to the Global Overview and the drill-down to the server and the SQL Server instance there. It then allows me to look at the Top Queries running at that point, sort them by CPU, and identify the query that was potentially causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool for investigating your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time. You need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • The Eight Most-Important EBS Techstack Stories in 2010

    - by Steven Chan
    I've never really understood the custom of stuffing a summary of one's family's activities for the year in a Christmas/Hanukkah/Kwanzaa card.  It seems a little self-congratulatory and impersonal.  I'd rather my friends kept authentically in touch throughout the year, but perhaps that's just me.Nonetheless, I see the value of a year-end summary in the IT industry.  I spend a lot of time helping our customers understand the latest new developments... and straightening out confusion over changes to the old and familiar.  It can be hard to keep up with the latest news in this space.Here are the eight most-important news items for 2010, with suggested actions for Apps DBAs:Premier Support for EBS 11.5.10 ended on November 30, 2010You need to be on a minimum baseline of 11.5.10 patches to be eligible for Extended Support.  New patches for EBS 11i released during the Extended Support period will be produced only for the minimum baseline configuration.Action: Ensure that your EBS 11i environments meet the minimum baseline requirements. Minimum Baselines are Emerging for EBS 12.0 Extended SupportExtended Support for EBS 12.0 begins on February 1, 2012.  That's only 13 months away.  Minimum baselines haven't been finalized yet, but the 12.0.6 Release Update Pack and the Financials CPC July 2009 are currently slated.  Action: Ensure that your EBS 12.0 environments meet the currently-specified baseline requirements. Sun, Windows, and Linux users should have upgraded to JDK 6 by nowJDK 5's End of Service Life was October 30, 2009 for those three platforms.  If you're running the E-Business Suite on Sun, Windows, or Linux, you should upgrade your EBS servers to JDK 6.  Alternatively, you can purchase Java for Business support (the equivalent of Extended Support for Java). Action: Upgrade your Sun, Windows, or Linux EBS servers to JDK 6. Premier Support for Database 10gR2 ended on July 31, 2010The 10gR2 Database is now in Extended Support.  If you're still on 10gR2, you should start planning your upgrade to a higher certified database version such as 11gR2 11.2.0.2.Action: Upgrade to 10gR2 databases to 11gR2 11.2.0.2. 

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about ‘being future ready’ - previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release for Oracle’s key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations to keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight in particular is a key theme for Oracle overall and for its data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value and in some cases even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle’s Data Integration 12c product strategy based on key trends from the initial OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products’ road map. Brad explained why and how fast time to value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market. Following Brad’s session, Nick Wagner provided an in-depth review of Oracle GoldenGate’s latest features and roadmap. Nick discussed how Oracle GoldenGate’s tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity of the product is still a major focus for GoldenGate’s development and there will be more news on that front when there is a major release.
After GoldenGate’s product strategy session, Denis Gray from the PM team presented Oracle Data Integrator’s product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict management requirements. Another important goal we had this year for the Data Integration track at OpenWorld was hearing from our customers. We ended day 1 on just that, with a wonderful award ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation. The ceremony was held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and Yalumba Wine Company, the winners in the Data Integration category. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… Selected for their innovative use of Oracle’s Data Integration products, the winners for the Data Integration category are Royal Bank of Scotland and The Yalumba Wine Company. Congratulations!!! Royal Bank of Scotland’s Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients’ businesses flowing – whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse – providing a real-time data integration solution that feeds transactional data to their analytics system in minutes to enable improved decision making with timely, accurate data for their business users. Oracle Data Integrator’s in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. This solution makes it such that changes happening in the analytics systems are available the same day they are deployed on the operational system, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That’s outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
It is the oldest family owned winery in Australia with a unique way of aging their wines in specially crafted 100 liter barrels. Did you know that “Yalumba” is Aboriginal for “all the land around”? The Yalumba Wine Company is growing rapidly, and was in need of introducing a more modern standard to the existing manufacturing processes to meet globalization demands, overall time-to-market, and better operational efficiency objectives of product development. The Yalumba Wine Company worked with a partner, Bristlecone to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, shorter data loading and transformation performance to their data warehouse with rapid incorporation of new data sources, and a solid future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle’s Data Integration portfolio. Terrific! In addition to these two customers on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. On Tuesday the GoldenGate panel included: Land O’Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common. They all use GoldenGate to boost their ERP application. Please read the recap by Irem Radzik. On Wednesday, the ODI Panel included: Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals for a session filled with interesting projects, use cases and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined with us and we hope to stay connected! To hear more about our Data Integration12c products join us in an upcoming webcast to learn more. Follow us www.twitter.com/ORCLGoldenGate or goto our website at www.oracle.com/goto/dataintegration

    Read the article

  • Is this a secure solution for RESTful authentication?

    - by Chad Johnson
    I need to quickly implement a RESTful authentication system for my JavaScript application to use. I think I understand how it should work, but I just want to double-check. Here's what I'm thinking -- what do you guys think? Database schema users id : integer first_name : varchar(50) last_name : varchar(50) password : varchar(32) (MD5 hashed) etc. user_authentications id : integer user_id : integer auth_token : varchar(32) (AES encrypted, with keys outside database) access_token : varchar(32) (AES encrypted, with keys outside database) active : boolean Steps The following happens over SSL. I'm using Sinatra for the API. JavaScript requests authentication via POST to /users/auth/token. The /users/auth/token API method generates an auth_token hash, creates a record in user_authentications, and returns auth_token. JavaScript hashes the user's password and then salts it with auth_token -- SHA(auth_token + MD5(password)). POST the user's username and hashed+salted password to /users/auth/authenticate. The /users/auth/authenticate API method will verify that SHA(AES.decrypt(auth_token) + user.password) == what was received via POST. The /users/auth/authenticate method will generate, AES encrypt, store, and return an access token if verification is successful; otherwise, it will return 401 Unauthorized. For any future requests against the API, JavaScript will include access_token, and the API will find the user account based on that.
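To make the hashing handshake in steps 2-5 concrete, here is a minimal Python sketch of the digest computation only (an illustration of the scheme as described above, not production code and not the poster's actual JavaScript/Sinatra stack). It assumes SHA-256 for the unspecified "SHA" step and plain string concatenation for the salting, and it leaves out the AES encryption and decryption of the stored token.

    import hashlib
    import os

    def md5_hex(s):
        return hashlib.md5(s.encode("utf-8")).hexdigest()

    def sha_hex(s):
        # The scheme above just says "SHA"; SHA-256 is assumed here.
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

    # Step 2: the server generates auth_token and returns it to the client.
    auth_token = os.urandom(16).hex()

    # Step 3: the client salts its MD5-hashed password with auth_token.
    password = "example-password"
    client_digest = sha_hex(auth_token + md5_hex(password))

    # Step 5: the server recomputes the digest from the stored MD5 hash
    # (users.password) and the stored auth_token (AES decryption omitted here),
    # then compares it with what the client POSTed.
    stored_password_md5 = md5_hex(password)  # value of users.password
    server_digest = sha_hex(auth_token + stored_password_md5)

    # Step 6 happens only if the digests match; otherwise return 401.
    assert client_digest == server_digest
    print("digests match; issue, store, and return an access_token")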

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior developers not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and to do so incrementally without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Move a SQL Azure server between subscriptions

    - by jamiet
    In September 2011 I published a blog post SSIS Reporting Pack v0.2 now available in which I made available the credentials of a sample database that one could use to test SSIS Reporting Pack. That database was sitting on a paid-for Azure subscription and hence was costing me about £5 a month - not a huge amount, but when I later got a free Azure subscription through my MSDN Subscription in January 2012 it made sense to migrate the database onto that subscription. Since then I had been endeavouring to make that move, but a few failed attempts combined with lack of time meant that I had not yet gotten round to it. That is, until this morning, when I heard about a new feature in the Azure Management Portal that enables one to move a SQL Azure server from one subscription to another. Up to now I had been attempting to use a combination of SSIS packages and/or scripts to move the data but, as I alluded, I ran into a few roadblocks, hence the ability to move a SQL Azure server was a godsend to me. I fired up the Azure Management Portal and a few clicks later my server had been successfully migrated; moreover, the name of the server doesn't change and neither do any credentials, so I have no need to go and update my original blog post either. It's easy to be cynical about SQL Azure (and I maintain a healthy scepticism myself) but that, my friends, is cool! You can read more about the ability to move SQL Azure servers between subscriptions from the official blog post Moving SQL Azure Servers Between Subscriptions. @Jamiet

    Read the article

  • Virtualization in Solaris 11 Express

    - by lynn.rohrer(at)oracle.com
    In Oracle Solaris 10 we introduced Oracle Solaris Containers -- lightweight virtual application environments that allow you to consolidate your Oracle Solaris applications onto a single Oracle Solaris server and make the most of your system resources. The majority of our customers are now using Oracle Solaris Containers on their enterprise systems for applications ranging from web servers to Oracle Database installations. We can also make these Containers highly available with Oracle Solaris Cluster, the industry's first virtualization-aware enterprise cluster product. Using Oracle Solaris Cluster you can fail over applications in a Container to another Container on a single system or across systems for additional availability.

    We've added significant features in Oracle Solaris 11 Express to improve and extend the Oracle Solaris Zone model:
    - Integration of Zones with our new Solaris 11 packaging system (aka Image Packaging System) to provide easy software updates within a zone
    - Support for Oracle Solaris 10 Zones to run your Solaris 10 applications unaltered on an Oracle Solaris 11 Express system
    - Integration with the new Oracle Solaris 11 network stack architecture (more on this in a future blog post)
    - Improved observability with the zonestat management interface and commands
    - Delegated administration rights for owners of individual non-global zones
    - Tight integration with Oracle Solaris ZFS to allow dedicated datasets per zone
    - With ZFS as the default file system, we can now provide easy-to-manage Boot Environments for zones

    This quick summary is just to whet your appetite to learn more about the Oracle Solaris 11 Express Zones enhancements. Fortunately we can serve a full meal at the Oracle Solaris 11 Express Technology Spotlight on Virtualization page on the Oracle Technology Network.

    Read the article

  • A new version of Oracle Enterprise Manager Ops Center Doctor (OCDoctor ) Utility released

    - by Anand Akela
    In February we posted a blog about the Oracle Enterprise Manager Ops Center Doctor, aka the OCDoctor utility. This utility assists in various stages of an Ops Center deployment and can be a real life saver. It is updated on a regular basis with additional knowledge (similar to an antivirus subscription) to help you identify and resolve known issues or suggest ways to improve performance.

    A new version (Version 4.00) of the OCDoctor is now available. This new version adds full support for the recently announced Oracle Enterprise Manager Ops Center 12c, including prerequisites checks, troubleshoot tests, log collection, tuning and product metadata updates. In addition, it adds several bug fixes and enhancements to the OCDoctor utility.

    To download OCDoctor for new installations: https://updates.oracle.com/OCDoctor/OCDoctor-latest.zip

    For existing installations, simply run:
    # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update

    Tip: If you have Oracle Enterprise Manager Ops Center 12c EC installed, your OCDoctor will automatically update overnight.

    Join the Oracle launch webcast "Total Cloud Control for Systems" on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives. Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistic information about each of the database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both of these. Let's have a look at this function and examine its results:

    SELECT db_name(database_id) DatabaseName, * FROM sys.dm_io_virtual_file_stats(NULL, NULL)

    The first column in the result set is DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have file_id, which represents the ID for the file, whether it be a data file or a transaction log file. The 'sample_ms' column represents the total time in milliseconds that the instance has been up and running. Next we have 'num_of_reads', 'num_of_bytes_read', and later 'num_of_writes' and 'num_of_bytes_written'. These columns represent the number of reads or writes and the number of bytes read or written against a particular file, and they are beneficial when determining how often a particular file is being accessed. The 'io_stall_read_ms' and 'io_stall_write_ms' columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file, respectively. The 'io_stall' column is the sum of both read and write IO stalls. The 'size_on_disk_bytes' column represents the size of the respective file on your disk subsystem. Lastly, the 'file_handle' column is simply the Windows file handle. This Dynamic Management Function is useful when you need to analyze your database files for the purpose of segregating high-IO databases. It gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls. These could be your best candidates for moving onto separate IO channels. For more information about this DMF, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms190326.aspx Follow me on Twitter @PrimeTimeDBA
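    To make those raw counters easier to act on, they can be turned into an average stall per read and per write for each file. The query below is a quick illustrative sketch rather than part of the original post; it assumes only the columns described above plus the standard sys.master_files catalog view for resolving physical file names.

    -- Average IO stall (in ms) per read and per write, for each database file
    -- NULLIF guards against dividing by zero for files that have not been read or written yet
    SELECT DB_NAME(vfs.database_id) AS DatabaseName,
           mf.physical_name AS FileName,
           vfs.num_of_reads,
           vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY avg_read_stall_ms DESC;

    Files that float to the top of this list are the ones most likely to benefit from being moved onto separate IO channels, as the post suggests.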

    Read the article

  • Improving the performance of a db import process

    - by mmr
    I have a program in Microsoft Access that processes text and also inserts data into a MySQL database. This operation takes 30 mins or less to finish. I translated it into VB.NET and it takes 2 hours to finish. The program goes like this: a text file contains individual swipes from a corresponding person; it contains their id, the time and date of the swipe in the machine, and an indicator of whether it is a time-in or a time-out. I process this text, segregate the information and insert the time-in and time-out per row. I also check if there are double occurrences in the database. After checking, I simply merge the time-in and time-out of the corresponding person into one row only. This process takes 2 hours to finish in VB.NET, considering I have a table to compare against which contains 600,000+ rows. Now, I read on the internet that Python is best at text processing, and I already have a test, but I doubt it will help with the database operations. What do you think is the best programming language for this kind of problem? How can I speed up the process? My first idea was to use Python instead of VB.NET, but since people here on SO are telling me that this most probably won't help, I am searching for different solutions.
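    For what it's worth, the usual cure for this kind of slowdown is to stop doing per-row round trips and per-row duplicate checks from the client, and instead bulk-load the raw swipes into a staging table so MySQL can do the merge in one set-based statement. The sketch below is purely illustrative and not part of the original question; every table and column name in it (swipe_staging, attendance, direction, etc.) is a hypothetical stand-in for the real schema.

    -- Bulk-load the raw swipe file into a staging table (hypothetical names throughout)
    LOAD DATA LOCAL INFILE 'swipes.txt'
    INTO TABLE swipe_staging
    FIELDS TERMINATED BY ','
    (employee_id, swipe_date, swipe_time, direction);  -- direction is 'IN' or 'OUT'

    -- One set-based pass merges time-in and time-out into a single row per person per day;
    -- a unique key on (employee_id, work_date) plus INSERT IGNORE drops double occurrences
    INSERT IGNORE INTO attendance (employee_id, work_date, time_in, time_out)
    SELECT employee_id,
           swipe_date,
           MIN(CASE WHEN direction = 'IN'  THEN swipe_time END),
           MAX(CASE WHEN direction = 'OUT' THEN swipe_time END)
    FROM swipe_staging
    GROUP BY employee_id, swipe_date;

    Whether the surrounding text parsing happens in VB.NET or Python matters far less once the 600,000-row comparison is done inside the database rather than one row at a time from the client.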

    Read the article

  • VISIT ORACLE LINUX PAVILION @ORACLE OPENWORLD

    - by Zeynep Koch
    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money.

    Partners exhibiting their solutions in the Oracle Linux Pavilion include:
    - BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    - Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    - Data Intensity: cloud services and application management
    - Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    - HP: converged cloud, converged infrastructure, application transformation, and information optimization
    - LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    - Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    - Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    - NetApp: storage and data management
    - QLogic: high performance networking
    - Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. And speaking of free, be sure to stop by for some cool treats, courtesy of sponsor QLogic:
    - Smoothie Bar on Monday, October 1 from 2:30 p.m. - 5:30 p.m.
    - Ice Cream Social on Wednesday, October 3 from 1:00 p.m. - 2:00 p.m.

    We look forward to seeing you at the pavilion.

    Read the article

  • Partner Webcast - Is your Application Ready? Prove it with the Oracle Exastack Program

    - by Thanos
    At Oracle we design Engineered Systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimizes performance at every IT layer to simplify business operations, drive down costs and accelerate business innovation. As the Engineered System foundation platform, Oracle Exadata and Oracle Exalogic run all of Oracle Cloud's services across a range of global data centers, delivering extreme performance, massive scalability, and fault tolerance with no single point of failure.

    The Oracle Exastack Program enables you as an ISV to leverage Oracle's scalable, integrated infrastructure to test, tune and optimize your applications for high performance. By getting Exastack Ready and Exastack Optimized, your applications get formal recognition from Oracle and additional visibility, while you as an ISV receive an additional set of OPN benefits. Don't miss this opportunity to learn more about how you can optimize your applications to run faster and more reliably by leveraging Oracle Exastack, and become more competitive by letting everybody know you are ready.

    Agenda:
    - Oracle Engineered Systems Strategy
    - OPN Exastack Program Benefits & Objectives
    - Value for You
    - Oracle is resourced for your success
    - How to Apply – Demo
    - Next Steps & Useful contacts

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Thursday 06 December 2012, 10.00 CET (GMT+1). Duration: 1 hour. Register Now!

    For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle Technologies, upcoming partner webcasts and events. Existing content available on YouTube - SlideShare - Oracle Mix.

    Read the article

  • WebCenter .NET Accelerator - Microsoft SharePoint Data via WSRP

    - by john.brunswick
    Platforms in the enterprise will never be homogeneous. As much as any vendor would enjoy having their single development or application technology be exclusively adopted by customers, too much legacy, time, education, innovation and too many vertical business needs exist to make using a single platform practical. Java and .NET are the two industry application platform heavyweights, and more often than not business users are leveraging various systems in their day-to-day activities that incorporate applications developed on top of both platforms.

    BEA Systems acquired Plumtree Software to complete their "liquid" view of data, stressing that, regardless of the particular source system, heterogeneous data could interoperate not only through layers that allowed for data aggregation, but also at the "glass" or UI layer. The technical components that allowed the integration at the glass thrive today at Oracle, helping WebCenter to provide a rich composite application framework. Oracle Ensemble and the Oracle .NET Application Accelerator allow WebCenter to consume and interact with the UI layers provided by .NET applications and a series of other technologies. The beauty of the .NET accelerator is that it can consume any .NET application and act as a Web Services for Remote Portlets (WSRP) producer.

    I recently had a chance to leverage the .NET accelerator to expose an ASP.NET 2.0 (C#) application in the WebCenter UI (pictured above) and wanted to share a few tips to help others get started with similar integrations. I was using two virtual machines for the exercise - one with Windows Server 2003, running SharePoint, and the other running WebCenter Spaces 11g. For my sample application data I ended up using SharePoint 2007 lists and calendars (MOSS 2007) to supply results using a .NET API for SharePoint.

    Read the article

  • What is ADO ?

    - by Aamir Hasan
    What is ADO?
    - ADO is a Microsoft technology
    - ADO stands for ActiveX Data Objects
    - ADO is a Microsoft Active-X component
    - ADO is automatically installed with Microsoft IIS
    - ADO is a programming interface to access data in a database

    Accessing a Database from an ASP Page
    The common way to access a database from inside an ASP page is to:
    1. Create an ADO connection to a database
    2. Open the database connection
    3. Create an ADO recordset
    4. Open the recordset
    5. Extract the data you need from the recordset
    6. Close the recordset
    7. Close the connection

    Example
    <%
    set conn=Server.CreateObject("ADODB.Connection")
    conn.Provider="Microsoft.Jet.OLEDB.4.0"
    conn.Open(Server.Mappath("/db/northwind.mdb"))
    set rs = Server.CreateObject("ADODB.recordset")
    rs.Open "Select * from Customers", conn
    do until rs.EOF
        for each x in rs.Fields
            Response.Write(x.name)
            Response.Write(" = ")
            Response.Write(x.value & "<br />")
        next
        Response.Write("<br />")
        rs.MoveNext
    loop
    rs.close
    conn.close
    %>

    Read the article

  • IoT end-to-end demo – Remote Monitoring and Service By Harish Doddala

    - by JuergenKress
    Historically, data was generated from predictable sources, stored in storage systems and accessed for further processing. This data was correlated, filtered and analyzed to derive insights and/or drive well-constructed processes. There was little ambiguity in the kinds of data, the sources it would originate from and the routes that it would follow. The Internet of Things (IoT) creates many opportunities to extract value from data that result in significant improvements across industries such as Automotive, Industrial Manufacturing, Smart Utilities, Oil and Gas, High Tech and Professional Services, etc. This demo showcases how the health of remotely deployed machinery can be monitored, illustrating how data coming from devices can be analyzed in real time, integrated with back-end systems and visualized to initiate action as may be necessary.

    Use-case: Remote Service and Maintenance
    Critical machinery, once deployed in the field, is expected to work with minimal failures while delivering high performance and reliability. In typical remote monitoring and industrial automation scenarios, although many physical objects from machinery to equipment may already be "smart and connected," they are typically operated in a standalone fashion and not integrated into existing business processes. IoT adds an interesting dynamic to remote monitoring in industrial automation solutions in that it allows equipment to be monitored, upgraded, maintained and serviced in ways not possible before. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: IoT,Iot demo,sales,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

< Previous Page | 487 488 489 490 491 492 493 494 495 496 497 498  | Next Page >