Search Results

Search found 590 results on 24 pages for 'dw appliance'.

Page 19/24 | < Previous Page | 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Why do most routers not include local DNS?

    - by user785194
    I need to change my firewall/router, and I'd prefer something with built-in DNS to resolve queries on the local subnets. I've got a mixed Linux/Windows system, often with only one computer turned on, and I frequently have problems resolving local names. I don't want to keep a Linux box permanently on just for DNS, and I'd prefer to have DNS in my router appliance, which is always on. I search Google for this occasionally but never find anything. You always get the obvious answers - it's not possible, put everything in /etc/hosts, NetBIOS, dedicated box, etc. So what am I missing? Why don't "cheap" routers let you do this? I'm pretty sure that Cisco kit does this. Almost all cheap routers will let you do MAC address reservation, so that DHCP always hands the same address to a given machine. So why can't they simply do DNS as well for everything on the local subnets, just passing remote domains through to the ISP?

    Read the article

  • Extend university wifi network [migrated]

    - by asfasdoiuh ouhouhouh
    I live on a university campus and can get a wifi signal outside my window, but not inside the room. The solution I use at the moment is a USB wifi dongle outside, connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I am trying to find another solution to improve the reception of my network. One idea is to set up a router outside (in a place with a stronger signal) and bring the connection inside with an Ethernet cable, but the problem is that our uni wifi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication is tied to the MAC address that connects to the network (the client appliance, in this case). If I use a router with MAC-clone capability, will I be redirected through the captive portal on my laptop and be able to log in from there, or do I need to set up the router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want? Thank you all

    Read the article

  • Self Hosted Dropbox Alternative?

    - by Hutch
    Does anyone know of any self-hosted Dropbox alternatives? We have a need to share files/folders between staff and partners (small scale) and for various reasons we'd prefer to host it ourselves. SharePoint seems a little too focused on "check in/check out" and things like WebDAV/FTP seem a little kludgy. In an ideal world, something where you (as an IT person) can set up an area and make a user its "owner", and from there they can add their own customers, would be great. A Windows or virtual appliance would be ideal.

    Read the article

  • Host CPUID, Guest CPUID and UserCPUID / what are they?

    - by amir.csco
    I found out that there are some IDs associated with CPUID in the vmx file of each virtual machine; these IDs are hostCPUID.{Num}, hostCPUID.80000001, guestCPUID.{Num}, guestCPUID.80000001, userCPUID.{Num} and userCPUID.80000001. After some examination and searching I found that guestCPUID and userCPUID are the same, but hostCPUID is always different. I also realized that these IDs are 32 hexadecimal characters containing the EAX, EBX, ECX and EDX register values. I just want to know why hostCPUID is different from the other two IDs, and what the difference is between this format and the other format explained in the VMware documents (cpuid.{Num}.edx or cpuid.{Num}.eax), which is written as binary masks rather than hexadecimal. I also need to know why there are no CPUID entries in the vmx file of some virtual appliances that are often available in OVF/OVA format and can simply be deployed. Best Regards,
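
    The 32 hexadecimal characters in each entry are just four 32-bit CPUID result registers concatenated, whereas the cpuid.{Num}.edx style in the VMware docs describes a single register as a binary feature mask. A minimal sketch of splitting such a value back into registers; the sample value is made up, and the register order (eax, ebx, ecx, edx) is an assumption for illustration, since the question itself does not establish it.

        # Sketch: split a 32-hex-character CPUID value from a .vmx file into
        # its four 32-bit register fields. Sample value and register order are
        # assumptions, not taken from a real .vmx file.
        def split_cpuid(value_hex):
            if len(value_hex) != 32:
                raise ValueError("expected 32 hex characters (4 x 32-bit registers)")
            regs = ["eax", "ebx", "ecx", "edx"]
            return {reg: int(value_hex[i * 8:(i + 1) * 8], 16)
                    for i, reg in enumerate(regs)}

        if __name__ == "__main__":
            sample = "000206c200200800029ee3ffbfebfbff"   # made-up hostCPUID.1-style value
            for reg, val in split_cpuid(sample).items():
                print(f"{reg} = 0x{val:08x} = {val:032b}")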

    Read the article

  • New Marketing Kits Available

    - by Cinzia Mascanzoni
    New marketing kits are available on the OPN portal: Oracle Optimized DataCenter; Oracle Storage for Oracle Database and Engineered Systems; StorageTek SL150 - New Scalable Storage Solutions for Growing Businesses; Extreme Database Performance meets Its Backup and Recovery Match with Oracle's Sun ZFS Backup Appliance; Maximize Value and Business Agility through Data Center Virtualization; Be A Content King with Oracle WebCenter Content.

    Read the article

  • Extend university wifi network

    - by asfasdoiuh ouhouhouh
    I live on a university campus and can get a wifi signal outside my window, but not inside the room. The solution I use at the moment is a USB wifi dongle outside, connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I am trying to find another solution to improve the reception of my network. One idea is to set up a router outside (in a place with a stronger signal) and bring the connection inside with an Ethernet cable, but the problem is that our uni wifi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication is tied to the MAC address that connects to the network (the client appliance, in this case). If I use a router with MAC-clone capability, will I be redirected through the captive portal on my laptop and be able to log in from there, or do I need to set up the router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want?

    Read the article

  • Building intranet search

    - by gmkv
    At work, we have lots of information squirreled away in many different sites -- wikis, product docs, ticketing system, etc -- many of which require authentication. I'm very interested in having a single way to search all our various silos, and in my spare time have looked at Nutch, Grub, Django + Haystack, etc. None of these is a complete solution a la Google Mini or Google Search Appliance. Has anybody built a basic intranet search engine out of a mixture of these tools? Would you have recommendations about how to go about it? I like Django, and Haystack seems to be a mildly popular search solution for it, but I'd need to wire up a crawler that can support crawling authenticated sites to it.
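
    As a rough illustration of the "crawler that can handle authenticated sites" piece, a session-based fetcher can log in once and reuse the cookies while walking links, handing each fetched page to whatever indexer you choose (for example a Haystack-backed Django model). The URLs, form field names and credentials below are hypothetical placeholders, and real silos (wikis behind SSO, ticketing systems, etc.) usually need site-specific login handling, so treat this as a sketch rather than a finished crawler.

        # Sketch of a tiny authenticated crawler; placeholder URLs and credentials.
        import requests
        from urllib.parse import urljoin
        from html.parser import HTMLParser

        class LinkParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(href)

        def crawl(start_url, login_url, creds, max_pages=50):
            session = requests.Session()
            session.post(login_url, data=creds)      # authenticate once, keep cookies
            seen, queue, pages = set(), [start_url], {}
            while queue and len(pages) < max_pages:
                url = queue.pop()
                if url in seen or not url.startswith(start_url):
                    continue
                seen.add(url)
                resp = session.get(url)
                if resp.status_code != 200:
                    continue
                pages[url] = resp.text               # hand this text to your indexer
                parser = LinkParser()
                parser.feed(resp.text)
                queue.extend(urljoin(url, href) for href in parser.links)
            return pages

        # Example with made-up values:
        # docs = crawl("https://wiki.internal/", "https://wiki.internal/login",
        #              {"username": "svc-search", "password": "secret"})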

    Read the article

  • Bring on the Cheer, Oracle’s Q3 is Here

    - by Kristin Rose
    November is long gone and December is near… this must mean OPN’s Q2 Winter Wrap-Up is here! Listed below are just a few of the highlights from Oracle’s past three months… Yet another successful Oracle OpenWorld 2012 and the launch of our first ever Oracle PartnerNetwork Exchange program! Get the recap. Our exciting Java Embedded @ JavaOne event. Get the low-down here! The debut of our new Oracle Cloud programs for partners, which have already created some awesome buzz in the Channel. Check out the CRN article, and don’t forget to watch the Cloud Programs Overview video and visit our OPN Cloud Knowledge Zone! On the product front, Oracle’s Sun ZFS Storage Appliance was awarded the 2012 Tech Innovator and Enterprise App Award by CRN. Read the full article. Oracle partner, Hitachi Consulting, reached OPN’s premier Diamond Level status. Read more. Was Oracle part of your September, October or November highlights? If so, leave us a comment below, we’d love to feature your story! Also, don’t forget to share the love by re-tweeting this post on Twitter or “liking” this post on Facebook! Stay Warm, The OPN Communications Team 

    Read the article

  • 'ACT On' OVCA for Cloud Providers Program Launch Webcast: June 12, 2014 - 9am UKT / 10am CET / 11am EET

    - by Cinzia Mascanzoni
    We invite you to join the OVCA for Cloud Providers 'ACT On' program launch at 11am BST / 12noon CET on June 12. · More and more customers realize the value of shifting to a Converged IT Infrastructure; this is why IDC expects this market to grow 40% annually for the next 2 years. · The Oracle Virtual Compute Appliance (OVCA) with attached ZFS storage is the perfect answer to this market trend. By providing rapid application and cloud deployment, OVCA allows customers to cut capital expenditures by up to 50% and deploy key applications up to 7x faster. · For partners, OVCA supports their journey to consolidation, virtualization and cloud, and allows them to sell higher-value services to their customers. The objective of this webcast is to share with you the OVCA value proposition, help you identify the best target partners, and provide you with the Enablement and Demand Generation content and resources. To register and for further details click here.

    Read the article

  • Move and clone VirtualBox machines with filesystem commands

    - by mit
    I know of two ways to clone a VirtualBox machine on a Linux host. One is to use the VirtualBox GUI and export and re-import the machine as an appliance (in the File menu of VirtualBox). The other is to clone only the virtual disk containers: VBoxManage clonevdi source.vdi target.vdi (taken from http://forums.virtualbox.org/viewtopic.php?p=853#p858 ). I would have to create a new VM afterwards and use the cloned virtual disk. Is there a way I can just copy a virtual disk and do the rest by hand? I'd have to manually edit ~/VirtualBox/VirtualBox.xml and insert a new disk and a new machine: can I just make up UUIDs, or how would this work? I would very much prefer this hardcore method of doing things, as it allows me to freely and rapidly back up, restore, move or clone machines. Or is there a better way to do this?
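
    As a rough sketch of the "hardcore" route (the paths are made up, and exact CLI behaviour varies between VirtualBox versions): rather than hand-editing VirtualBox.xml and inventing UUIDs yourself, you can copy the .vdi at the filesystem level, stamp the copy with a fresh UUID via VBoxManage internalcommands sethduuid, and then attach it to a new or existing VM (from the GUI, or on newer releases with VBoxManage storageattach).

        # Sketch: file-level clone of a VDI plus a fresh UUID so VirtualBox
        # will accept it alongside the original. Paths and names are hypothetical.
        import shutil
        import subprocess

        SRC = "/home/me/VirtualBox/HardDisks/source.vdi"   # assumed location
        DST = "/home/me/VirtualBox/HardDisks/clone.vdi"

        shutil.copy2(SRC, DST)                             # plain filesystem copy

        # Give the copy its own UUID; two disks with the same UUID cannot coexist.
        subprocess.run(["VBoxManage", "internalcommands", "sethduuid", DST], check=True)

        # On VirtualBox 4.x and later the disk can then be attached from the CLI, e.g.:
        # subprocess.run(["VBoxManage", "storageattach", "CloneVM",
        #                 "--storagectl", "SATA", "--port", "0", "--device", "0",
        #                 "--type", "hdd", "--medium", DST], check=True)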

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle "Systems Group" team, we now have a complete best practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like SPARC T4; OVM for SPARC hypervisor (version 2.2 and newer); Solaris 11; Ops Center 12c; the ZFS Storage Appliance; and Oracle network switches. By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for easy, GUI-based provisioning of new VMs; automated HA failover in the event of physical server failures; automatic load balancing across a cluster of VM hosts; and complete end-to-end monitoring. You should download this paper and check it out. Even if you aren't planning on buying all new hardware, and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Oracle Open World 2012

    - by Liu Maclean
    Oracle OpenWorld 2012 sessions: search the Content Catalog for Oracle OpenWorld 2012 sessions to find the ones you are interested in. Open World 2012 news: Larry announced Exadata X3. OOW 2012 introduced the Exadata X3 family - the X3-2, the X3-2 Expansion Rack and the X3-8. Exadata X3 overview: http://www.oracle.com/us/products/database/exadata/overview/index.html (see the ORACLE EXADATA Database MACHINE X3-8 data sheet and the ORACLE EXADATA Database MACHINE X3-2 data sheet). Exadata X3-2 hardware changes: the X3-2 compute (database) nodes move to 8-core Intel Xeon E5-2690 processors, taking each database server from 12 to 16 cores (about 33% more); memory grows from 96GB to 128GB, expandable to 256GB. The X3-2 cell (storage) nodes still use Intel Xeon CPUs and four flash cards per cell, with per-card flash capacity up about 40%, so a full X3-2 rack now carries 22.4TB of flash; the cell CPUs are newer 6-core Intel Xeon models, and the disks are unchanged from the X2-2: 600GB high-performance or 3TB high-capacity drives. Exadata X3-2 also adds a smaller entry configuration: besides the quarter rack there is now an eighth rack. Exadata X3-8 hardware changes: the X3-8 database servers are essentially the same as the X2-8's, while its storage cells are the same as the X3-2's, so a full X3-8 rack likewise carries 22.4TB of flash. This year's theme: Engineered to Work Together. Oracle Open World 2012 information - Open World 2012 home: http://www.oracle.com/openworld/index.html; Open World 2012 registration packages: http://www.oracle.com/openworld/register/packages/index.html. Dates: Sept. 30 - Oct. 4, 2012. Location: Moscone Center, San Francisco (747 Howard Street, San Francisco, California 94103). Also see Mark Hurd on OOW 2012: How big is OOW. Focus On documents for OOW 2012: Focus On Database Technologies; Focus On Real Application Clusters; Focus On Exadata; Focus On Oracle Database Appliance; Focus On Oracle Database Application Development; Focus On Oracle Database Security; Focus On Big Data; Focus On Data Warehousing; Focus On High Availability; Focus On Oracle Enterprise Manager Cloud Control 12c (and Private Cloud); Focus On Oracle Spatial and Graph; Focus On Oracle Database Utilities; Focus On Oracle Database Upgrade; Focus On Oracle Database Private Cloud; Focus On .Net; Focus On Oracle Database on Windows; Focus On Engineered Systems; Focus On Sunday Users Forum

    Read the article

  • Open Source or Low Cost Layer 7 ("Content") Switch?

    - by Rob
    I have several web servers that host a number of different applications and web sites. I want to make it easy to host apps or parts of web sites on different servers (e.g. example.com/foo might be on one physical server and example.com/bar might be on another). We do this with Apache redirects right now, but that gets messy fast, and in any case we have other problems we want to solve, such as throttling requests from individual clients and reducing dependency on specific physical hosts. Is there an open source or low cost layer 7 switch that would be suitable for this sort of task? I was hoping to find something like a stripped down Linux VMware guest/appliance built for this purpose, but haven't seen anything suitable out there so far.
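
    For reference, this kind of content switching (route by URL prefix to different backends) is what reverse proxies such as HAProxy, nginx, Varnish or Pound do out of the box. Purely to illustrate the idea, here is a minimal GET-only path router; the backend addresses are made up, and it ignores request headers, POST bodies and keep-alive, so it is a sketch of the concept rather than something to deploy.

        # Sketch: route requests to different backends by URL prefix.
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
        import urllib.request

        # Hypothetical routing table: URL prefix -> backend origin.
        ROUTES = {
            "/foo": "http://10.0.0.11",
            "/bar": "http://10.0.0.12",
        }
        DEFAULT_BACKEND = "http://10.0.0.10"

        class PathRouter(BaseHTTPRequestHandler):
            def do_GET(self):
                backend = next((b for prefix, b in ROUTES.items()
                                if self.path.startswith(prefix)), DEFAULT_BACKEND)
                try:
                    with urllib.request.urlopen(backend + self.path) as resp:
                        body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
                except Exception as exc:
                    self.send_error(502, "bad gateway: %s" % exc)

        if __name__ == "__main__":
            ThreadingHTTPServer(("", 8080), PathRouter).serve_forever()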

    Read the article

  • Device CAL, User CAL or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine: the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on our servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard edition, which is fine. We also have 4 servers running Standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs who access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:
        40    -- 3           -- [2]                           -- 2           -- 1
        nodes -- aggregators -- raw (which we want to run EE) -- calculators -- datawarehouse
    Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to datawarehouse. I am specifying the push vs. pull in case that changes how the number of licenses is calculated. Q1) How many device (or user) CALs do we need? Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)? Q3) What happens if another device tries to access a server with the limited number of device CALs? Q4) Are the device CALs a simultaneous number of devices or a total? Q5) Do Device and User CALs cost the same or is there a difference? Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: how hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? The fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehousing Institute's Data Warehouse Maturity Model) is to a Data Warehouse -- no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Otherwise they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content. Much has been written before about the advantages of that. So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel -- with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold: - There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier to add to the source product's implementation project than to add later; - OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project, or may even be impossible to fix afterwards. The conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data. Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation. I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is: bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation and it has to be managed properly -- just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review.aspx The Building a Data Mart with Pentaho Data Integration Video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (and I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the Star Schema, Slowly Changing Dimension types, etc., so I suggest, prior to watching this course, skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph’s excellent The Data Warehouse Toolkit. By the way, the author expands beyond Pentaho alone to MySQL and MonetDB, which is real icing on the cake! Indeed, I would even suggest the name of the course should be ‘Building a Data Warehouse with Pentaho’. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the VI editor and the Bash command shell, but it seems that similar requirements would also apply to the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I read it is very well suited to fast querying of big volumes of data thanks to its columnstore (vertical data storage). I don’t see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition is more than anyone needs to learn it, or even to go into a POC. MonetDB can be downloaded (it exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schemas one can otherwise use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to be heard, especially if one’s mother tongue is English, but I got over it within a few chapters. I liked the rate at which the material is presented; it made me feel I got my money's worth for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering and very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to get to know it more; for Data Warehousing, despite all the BigData tool offerings (Hive, Sqoop, Pig on Hadoop), using the traditional tools still rocks; chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult. In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a “supposed to be done yesterday” type of project. It is for a beginner to intermediate level ETL/DW developer. But one would need to learn more about Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review-again.aspx The Building a Data Mart with Pentaho Data Integration Video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (and I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the Star Schema, Slowly Changing Dimension types, etc., so I suggest, prior to watching this course, skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph’s excellent The Data Warehouse Toolkit. By the way, the author expands beyond Pentaho alone to MySQL and MonetDB, which is real icing on the cake! Indeed, I would even suggest the name of the course should be ‘Building a Data Warehouse with Pentaho’. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the VI editor and the Bash command shell, but it seems that similar requirements would also apply to the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I read it is very well suited to fast querying of big volumes of data thanks to its columnstore (vertical data storage). I don’t see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: Pentaho: the free Community Edition is more than anyone needs to learn it, or even to go into a POC. MonetDB can be downloaded (it exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or edit database entities and/or schemas one can otherwise use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to be heard, especially if one’s mother tongue is English, but I got over it within a few chapters. I liked the rate at which the material is presented; it made me feel I got my money's worth for every second. Eventually, my impressions are: Pentaho is an awesome ETL offering and very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to get to know it more; for Data Warehousing, despite all the BigData tool offerings (Hive, Sqoop, Pig on Hadoop), using the traditional tools still rocks; chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult. In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a “supposed to be done yesterday” type of project. It is for a beginner to intermediate level ETL/DW developer. But one would need to learn more about Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the Risk of ERP Consolidation starts first and foremost with your Data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don’t forget the finance side! When goods are shipped and promises are kept/not kept, there’s the issue of accounts. No single chart of accounts translates to no accountability. So – I’ve decided. I need to consolidate! Well, you can’t consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines: #1: Consolidate your Product Data; #2: Consolidate your Customer and Supplier (Party) Data; #3: Consolidate your Financial Data. Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend. Oracle’s Solution for ERP Consolidation: Master Data Management. Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It’s optimized to work with any application source (not just Oracle’s) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it’s important to leverage the AIA and SOA capabilities that are part of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle’s MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW, and Engineered Systems, combined with Enterprise Data Quality to enable comprehensive Data Governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.
    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it’s a specific ERP process like the one we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even much higher. For example, a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligence data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers: non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source; no mid-tier, with set-based transformations using database power; mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW; and an integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse. In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
    The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse, extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrisons' success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • The LoadLibraryA method returns error code 1114 (ERROR_DLL_INIT_FAILED) after more than 1000 cycles

    - by Javier
    Hi, I'm programming in C++, using Visual Studio 2008 on Windows XP, and I have the following problem: my application, which is a DLL that can be used from Python, loads an external DLL, uses the required methods, and then unloads this external DLL. It works properly, but after more than 1000 cycles the "LoadLibraryA" call returns a NULL handle. The main steps are: HINSTANCE h = NULL; h = LoadLibraryA(dllfile.c_str()); DWORD dw = GetLastError(); The error returned is ERROR_DLL_INIT_FAILED 1114 (0x45A): a dynamic link library (DLL) initialization routine failed. The DLL is unloaded with: FreeLibrary(mDLL); mDLL = NULL; where mDLL is declared as HINSTANCE mDLL;. First alternative tried: load the DLL only once and unload it when the application ends. This fixes the problem but introduces a new one: when the application ends, instead of first executing the DllMain of my application, which unloads the external DLL, the DllMain of the other DLL executes first. This causes the following error, because my application then tries to unload a DLL that has already unloaded itself: "Unhandled exception at 0x04a00d07 (DllName.DLL) in Python.exe: 0xC0000005: Access violation reading location 0x0000006b". Any suggestion will be welcomed. Thanks in advance. Regards.
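
    Error 1114 (ERROR_DLL_INIT_FAILED) showing up only after many load/unload cycles often points at some per-cycle resource that the external DLL (or its copy of the CRT) never gives back, for example TLS indexes, which are a finite per-process resource, so its DllMain eventually fails. A quick way to narrow it down is to hammer the load/free cycle in isolation and see whether it reproduces outside the host application. Below is a minimal stand-alone repro sketch; the DLL name is a placeholder, and it assumes a Windows Python whose bitness matches the DLL.

        # Sketch: repeatedly load and free the external DLL outside the host
        # application to see whether the 1114 / init failure reproduces on its own.
        import ctypes

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
        kernel32.LoadLibraryA.restype = ctypes.c_void_p    # avoid truncated handles on 64-bit
        kernel32.LoadLibraryA.argtypes = [ctypes.c_char_p]
        kernel32.FreeLibrary.argtypes = [ctypes.c_void_p]

        for i in range(2000):
            handle = kernel32.LoadLibraryA(b"external.dll")   # placeholder DLL name
            if not handle:
                raise OSError("LoadLibraryA failed on iteration %d: error %d"
                              % (i, ctypes.get_last_error()))
            if not kernel32.FreeLibrary(handle):
                raise OSError("FreeLibrary failed on iteration %d" % i)
        print("2000 load/unload cycles completed without error")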

    Read the article

  • External API function calls AS3 control timeline

    - by giles
    I have a problem using this code (below): the embedded Flash movie clip disappears or completely prevents the scrollto.js query from functioning in DW CS3. Communication between Flash and JavaScript works without problems; it is the callback I can't get to work, and, more frustratingly, it should be simple, as no values are required. So far, this has been hours of scouring the net without a workable end in sight...ahrr. What function would make this work?

    JavaScript – to call a Flash event from an HTML button link, placed between the head tags:

        function callExternalInterface()
        var flashMovie = window.document.menu;
        flashMovie.menu_up(value);

    menu_up is the string. Does anyone know of a workable function for the callback?

    HTML

        <div id="btn_up"><a href="#top" name="charDev" id="charDev" onclick="">top</a></div>

    Pane navigation div that uses the scrollto.js query; it's this link I need calling back to the embedded "menubtns.swf" (nested in "AS3Menu_javascript.swf") to play 5 frames of this movie clip, via a JS function.

    Embedded .swf code, using swfobject.js with allowScriptAccess=always:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="menu"
                width="251" height="251" id="menu">
          <param name="movie" value="../~Assets/Flash/AS3Menu_javascript.swf" />
          <param name="allowScriptAccess" value="always" />
          <param name="movie" value="ExternalInterfaceScript.swf" />
          <param name="quality" value="high" />
          <object type="application/x-shockwave-flash" data="../~Assets/Flash/AS3Menu_javascript.swf" width="250" height="250">
            <p>Alternative content</p>
          </object>
        </object>

    AS3 / Flash

        import flash.external.ExternalInterface;
        flash.system.Security.allowDomain(/sourceDomain/);
        ExternalInterface.addCallBack("menu_up", this, resetmenu);
        function resetmenu(){
            gotoAndPlay:("frame label" / "number")
        }

    Read the article

  • Opencv application crashes at runtime with error code 0x0000142

    - by Tuan Anh
    I have OpenCV and MinGW installed with the Code::Blocks IDE, following the instructions found here: http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/ I tried the simple image loading program in the article and the build process went fine, but when I try running the output program, it crashes with the error message "the application was unable to start correctly (0xc0000142). Click OK to close the application." I used Dependency Walker to see if the program failed to load a DLL module; here's the Dependency Walker output screen: https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png I am not used to DW, but as far as I can see in its output, some OpenCV DLLs failed to load and the Windows DLLs that were loaded are 64-bit instead of 32-bit (MinGW is 32-bit). I can't figure out why, since I already added the OpenCV bin directory to the Path environment variable, yet the app still cannot load the DLL modules. I also thought Windows would automatically load the proper 32-bit DLLs when a 32-bit app is run, but in this situation the app still fails to load. Does anyone have ideas?
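
    Since 0xc0000142 at startup is very often a 32-bit/64-bit mix-up (for example a 32-bit MinGW-built .exe picking up 64-bit OpenCV or runtime DLLs from PATH, or the reverse), it can help to check the architecture of each DLL the program actually resolves. Below is a small sketch that reads the Machine field from the PE header; the file paths are just examples of what you might point it at.

        # Sketch: report whether a PE file (EXE/DLL) is 32-bit or 64-bit by
        # reading the IMAGE_FILE_HEADER Machine field. Paths below are examples.
        import struct

        MACHINES = {0x014C: "x86 (32-bit)", 0x8664: "x64 (64-bit)",
                    0x01C4: "ARM Thumb-2", 0xAA64: "ARM64"}

        def pe_arch(path):
            with open(path, "rb") as f:
                header = f.read(4096)
            if header[:2] != b"MZ":
                return "not a PE file"
            pe_offset = struct.unpack_from("<I", header, 0x3C)[0]   # e_lfanew
            if header[pe_offset:pe_offset + 4] != b"PE\x00\x00":
                return "no PE signature found"
            machine = struct.unpack_from("<H", header, pe_offset + 4)[0]
            return MACHINES.get(machine, hex(machine))

        if __name__ == "__main__":
            for f in [r"C:\opencv\build\x86\mingw\bin\libopencv_core246.dll",
                      r"C:\MinGW\bin\libstdc++-6.dll"]:
                print(f, "->", pe_arch(f))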

    Read the article

  • MACRO compilation PROBLEM

    - by wildfly
    I was given a primitive task: to find out (and to put in cl) how many nums in an array are bigger than the following ones (meaning if (arr[i] > arr[i+1]) count++;), but I've got problems as it has to be a macro. I am getting errors from TASM. Can someone give me a pointer?

        SortA macro a, l
            LOCAL noes
            irp reg, <si,di,bx>
            push reg
            endm
            xor bx,bx
            xor si,si
            rept l-1            ;;also tried rept 3 : won't compile
            mov bl,a[si]
            inc si
            cmp bl,arr[si]
            jb noes
            inc di
        noes:   add di,0
            endm
            mov cx,di
            irp reg2, <bx,di,si>
            pop reg2
            endm
        endm

        dseg segment
            arr db 10,9,8,7
            len = 4
        dseg ends
        sseg segment stack
            dw 100 dup (?)
        sseg ends
        cseg segment
            assume ds:dseg, ss:sseg, cs:cseg
        start:
            mov ax, dseg
            mov ds,ax
            sortA arr,len
        cseg ends
        end start

    errors:

        Assembling file: sorta.asm
        **Error** sorta.asm(51) REPT(4) Expecting pointer type
        **Error** sorta.asm(51) REPT(6) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(10) Expecting pointer type
        **Error** sorta.asm(51) REPT(12) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(16) Expecting pointer type
        **Error** sorta.asm(51) REPT(18) Symbol already different kind: NOES
        Error messages: 6
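
    For reference, the count the macro is after is just a pairwise comparison over the array; a tiny illustration of the intended result, using the same sample data as the dseg above, may help sanity-check whatever the macro eventually leaves in cl/cx.

        # Reference for what the TASM macro should compute: how many elements
        # are bigger than the element that follows them.
        arr = [10, 9, 8, 7]          # same data as "arr db 10,9,8,7" above
        count = sum(1 for i in range(len(arr) - 1) if arr[i] > arr[i + 1])
        print(count)                 # 3 here: 10>9, 9>8 and 8>7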

    Read the article

  • Vim: change formatting of variables in a script

    - by sixtyfootersdude
    I am using vim to edit a shell script (it did not use the right coding standard). I need to change all of my variables from camel-hump notation (startTime) to caps-and-underscores notation (START_TIME). I do not want to change the way method names are represented. I was thinking one way to do this would be to write a function and map it to a key. The function could do something like generating this on the command line: s/<word under cursor>/<leave cursor here to type what to replace with> I think this function could be applicable to other situations, which would be handy. Two questions: Question 1: How would I go about creating that function? I have created functions in vim before; the biggest thing I am clueless about is how to capture movement. I.e., if you press dw in vim it will delete the rest of a word. How do you capture that? Also, can you leave an uncompleted command on the vim command line? Question 2: Got a better solution for me? How would you approach this task?

    Read the article

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design would need to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date". Users often want to know when a security will pay next. Therefore, I want to make it as easy as possible for them to get the next payment date and amount for each security. Also, users often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common that they want to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount would be changing each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.
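
    One way to see why a schedule-grain fact table covers both the current and the "as of" questions: if every scheduled payment is stored as its own row, then "next payment as of date D" is simply the earliest schedule row dated after D, for any D, so it never has to be maintained as a changing attribute. A toy illustration follows; the security IDs, dates and amounts are made up.

        # Toy illustration: with the whole payment schedule stored per security,
        # "next payment as of date D" becomes a simple lookup for any D.
        from datetime import date

        # Hypothetical schedule fact rows: (security_id, payment_date, amount)
        schedule = [
            ("BOND-1", date(2009, 1, 15), 125.0),
            ("BOND-1", date(2009, 4, 15), 125.0),
            ("BOND-1", date(2009, 7, 15), 125.0),
            ("MTG-7",  date(2009, 2, 1),  842.5),
            ("MTG-7",  date(2009, 3, 1),  842.5),
        ]

        def next_payment(security_id, as_of):
            future = [(d, amt) for sec, d, amt in schedule
                      if sec == security_id and d > as_of]
            return min(future) if future else None   # earliest payment after as_of

        print(next_payment("MTG-7", date(2009, 1, 31)))   # next MTG-7 payment after 2009-01-31
        print(next_payment("BOND-1", date(2009, 5, 1)))   # next BOND-1 payment after 2009-05-01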

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24  | Next Page >