Search Results

Search found 25727 results on 1030 pages for 'solution'.


  • Using Oracle WebCenter Content for Solving Government Content-Centric Business Problems

    - by Lance Shaw
    Organizations are seeing unprecedented amounts of unstructured information such as documents, images, e-mails, and rich media files. Join us December 12th to learn about how Oracle WebCenter Content can help you provide better citizen services by managing the content lifecycle, from creation to disposition, with a single repository. With Oracle WebCenter Content, organizations can address any content use case, such as accounts payable, HR on-boarding, document management, compliance, records management, digital asset management, or website management. If you have multiple content silos and need a strategy for consolidating your unstructured content to reduce costs and complexity, please join us to hear from Shahid Rashid, Oracle WebCenter Development, and Oracle Pillar Partner, Fishbowl Solutions, and learn how you can create the foundation for content-centric business solutions:
    • Solve the problem of multiple content silos (content systems, file systems, workspaces)
    • Fully leverage your content across applications, processes and departments
    • Create a strategy for consolidating your unstructured content to reduce costs and infrastructure complexity
    • Comply with regulations and provide audit trails while remaining agile
    • Provide a complete and integrated solution for managing content directly from Oracle Applications (E-Business Suite, PeopleSoft, Siebel, JD Edwards)
    Join us on December 12th at 2pm ET, 11am PT to learn more!

    Read the article

  • Where to store short strings (with my key) on the internet?

    - by Vi
    Is there a simple service to store strings under my key that can be used by bots? Requirements:
    • Simple command line access, automatic posting allowed
    • No need to keep some session with the service alive
    • I choose the key (so pastebins fail)
    • No requirement for registration/authentication (for simplicity)
    • The string should be kept for about a month
    I want something like:
    Store:
        $ echo some_data_0x1299C0FF | store_my_string testtest2011
    Retrieve:
        $ retrive_my_string testtest2011
        some_data_0x1299C0FF
    Do you have ideas about what I should use for this? I can only think of using IRC somehow (channel topics, /whowas, ...), but that is too complex for this simple task. No security is needed: anyone can update my string. The task looks very simple, so I expect the solution to be similarly simple - something like a single simple curl call.

    Read the article

  • Project Euler 12: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 12. As always, any feedback is welcome.

    # Euler 12
    # http://projecteuler.net/index.php?section=problems&id=12
    # The sequence of triangle numbers is generated by adding
    # the natural numbers. So the 7th triangle number would be
    # 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms
    # would be:
    # 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
    # Let us list the factors of the first seven triangle
    # numbers:
    #  1: 1
    #  3: 1,3
    #  6: 1,2,3,6
    # 10: 1,2,5,10
    # 15: 1,3,5,15
    # 21: 1,3,7,21
    # 28: 1,2,4,7,14,28
    # We can see that 28 is the first triangle number to have
    # over five divisors. What is the value of the first
    # triangle number to have over five hundred divisors?

    import time
    start = time.time()

    from math import sqrt

    def divisor_count(x):
        count = 2  # itself and 1
        for i in xrange(2, int(sqrt(x)) + 1):
            if x % i == 0:
                if i != sqrt(x):
                    count += 2
                else:
                    count += 1
        return count

    def triangle_generator():
        i = 1
        while True:
            yield int(0.5 * i * (i + 1))
            i += 1

    triangles = triangle_generator()
    answer = 0

    while True:
        num = triangles.next()
        if divisor_count(num) >= 501:
            answer = num
            break

    print answer
    print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
    a = raw_input('Press return to continue')
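    A side note added in editing, not part of the original post: a divisor count based on the prime factorization avoids the repeated sqrt() calls above and is usually faster for larger targets. A rough sketch (written for Python 3, unlike the Python 2 code above):

        def divisor_count_factored(n):
            # d(n) = product of (exponent + 1) over n's prime factorization
            count = 1
            d = 2
            while d * d <= n:
                exp = 0
                while n % d == 0:
                    n //= d
                    exp += 1
                count *= exp + 1
                d += 1
            if n > 1:          # one leftover prime factor
                count *= 2
            return count

        assert divisor_count_factored(28) == 6  # 1, 2, 4, 7, 14, 28

    Since T(n) = n(n+1)/2 and n, n+1 are coprime, d(T(n)) can even be computed as the product of the divisor counts of the two (halved) coprime parts.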

    Read the article

  • How to use the Binary Log file for Auditing and Replication in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me to the solution. EDIT: I have looked for auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, hence I have to move on to the binary log file concept. Now I am stuck initiating the concept, as I am not getting the concept of making the server master/slave. So can anybody guide me on how to actually initiate it via PHP?
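    For context, a minimal sketch added in editing (not from the original question; file names are illustrative): the binary log is switched on in my.cnf and can then be inspected with the stock mysqlbinlog tool, which is the usual starting point for both auditing and master/slave replication:

        # my.cnf (restart mysqld afterwards)
        [mysqld]
        log-bin   = mysql-bin    # enable the binary log
        server-id = 1            # must be unique per server for replication

        -- inside the mysql client: current log file and position
        SHOW MASTER STATUS;

        # shell: dump the logged changes as readable SQL (for auditing)
        $ mysqlbinlog mysql-bin.000001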

    Read the article

  • E-Commerce website using blogging service

    - by Rohit
    I have been selling software products for the last few years. I now want to sell them online. I have three ways in mind:
    • Using TypePad and integrating PayPal code in it
    • Getting my own website designed
    • Buying an online shopping cart app like Volusion, etc.
    I am not selling all the products but only selected software products. I want to know which is the best solution in terms of cost, manageability and getting an online response.

    Read the article

  • Getting my Brother MFC-J825DW working as a network scanner

    - by AntonChanning
    I've been attempting to set up my new Brother multi-function device to work as a printer and scanner using the following steps. It is connected to the network as a LAN device, not directly connected to my Ubuntu machine.
    1. Downloaded the LPR driver and cupswrapper driver from Brother support (select the deb packages, not the rpms).
    2. Followed the instructions to install the LPR driver.
    3. Followed the instructions to install the cupswrapper driver.
    After this point I was able to successfully perform a test print, so the printer part is working. So far I haven't had much luck getting the scanner working. This is what I've tried:
    1. Downloaded the brscan4 and scan-key-tool deb packages from Brother support.
    2. Followed the instructions for installing the scanner driver for network use.
    3. Followed the instructions for installing scan-key-tool.
    However, when I try to scan, it detects no scanner. I then tried the solution offered in this answer to a question based on a similar Brother printer, but no luck. I must have made a mistake somewhere along the line. Does anyone have any ideas about what I can try to find out where? Or should I uninstall everything and start again from the beginning?
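    One thing worth checking, a suggestion added in editing rather than from the original post (the IP address is a placeholder): for networked Brother devices the scanner usually has to be registered with the brscan4 driver before SANE can see it:

        $ brsaneconfig4 -a name=MFC-J825DW model=MFC-J825DW ip=192.168.1.50
        $ brsaneconfig4 -q      # list the devices brscan4 knows about
        $ scanimage -L          # SANE should now report the scanner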

    Read the article

  • Accenture Foundation Platform for Oracle (AFPO) – Your pre-built & tested middleware platform

    - by JuergenKress
    The Accenture Foundation Platform for Oracle (AFPO) is a pre-built, tested reference application, common services framework and development accelerator for Oracle’s Fusion Middleware 11g product suite that can help to reduce development time and cost by up to 30 percent. AFPO is a unique accelerator that includes documentation, day one deliverables and quick start virtual machine images, along with access to a skilled team of resources, to reduce risk and cost while improving project quality. It can be delivered all at once or in stages, on-site, hosted, or as a cloud solution.
    Accenture recently released AFPO v5 for use with their clients. Accenture added significant updates in v5, including Day 1 images and documentation for WebCenter and ADF Mobile, integrated with 30 other Oracle Middleware products, which significantly reduces the services effort of standing these products up. AFPO v5 also features rapid configuration and implementation capabilities for SOA/BPM integrated with Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Business Intelligence, Oracle Identity Management and Oracle ADF Mobile. AFPO v5 also delivers a starter kit for Oracle SOA Suite which builds upon the integration methodology, leading practices and extended tooling contained within the Oracle Foundation Pack. The combination of the AFPO starter kit and Foundation Pack jump-starts and streamlines Oracle SOA Suite implementation initiatives, helping to reduce the risk of deploying new technologies and making architectural decisions, so clients can ultimately reduce cost, risk and the time needed for an implementation.
    You'll find more information at:
    • Accenture's website: www.accenture.com/afpo
    • YouTube AFPO Telestration: http://www.youtube.com/watch?v=_x429DcHEJs
    • Press Release
    • Brochure
    • Contacts: [email protected], Patrick J Sullivan (Accenture – Global Oracle Technology Lead), [email protected]
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum
    Technorati Tags: AFPO, Accenture, middleware platform, oracle middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Proper management of PGPool II

    - by Cathy
    Currently I have a site, with one Postgres database server. It is just for a select number of users (less than ten) but it needs the maximum uptime possible. I would like some kind of automatic failover for the database. So I was thinking something like: one server running PGPool II, one running Postgres as master, one running Postgres as slave. But then, if wherever PGPool is running suddenly loses power (or dies, or whatever), there's a single point of failure and the whole thing goes down. Is there a solution, assuming that outsourcing this to someone else isn't possible?
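    One common direction (added in editing, not from the original thread; parameter names should be verified against the pgpool-II documentation for your version) is to run two pgpool-II nodes with the built-in watchdog, which moves a virtual IP between them so pgpool itself is no longer a single point of failure. A sketch of the relevant pgpool.conf entries, pgpool-II 3.x style:

        use_watchdog = on
        wd_hostname  = 'pgpool1'            # this node
        wd_port      = 9000
        delegate_IP  = '192.168.1.100'      # virtual IP the clients connect to
        heartbeat_destination0 = 'pgpool2'  # the peer pgpool node

    Clients always connect to the delegate IP; if the active pgpool node dies, the standby's watchdog takes the IP over.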

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that it is necessary to provide a test bed where the client is able to get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and data at any time, and the client is able to view it as it suits best for her/him.

    Limited or lack of online connectivity

    But what is going to happen when your client is not capable to be online - no matter for what reasons? Here are some of the more obvious ones:
    • No internet connection (permanently or temporarily)
    • Expensive connection, e.g. mobile data package, stay at a hotel, etc.
    • Presentation devices at an exhibition, e.g. using tablets or iPads
    • Being abroad for a certain time, and only occasionally online
    • No network coverage, especially on mobile
    • Bad infrastructure, like in Third World countries
    • Providing a catalogue on CD or USB pen drive
    Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings."
    Yes, sure we can do that. Eventually, you might think: why don't they simply use 3G enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but how to deliver a solution for it.

    Possible solutions... or not?

    We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done:
    • Replication of source files and database - The above mentioned web site is running on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    • Saving pages for offline usage - This is actually a quite tedious job, but still practicable for small web sites.
    • Tool based approach to 'harvest' the web site - There are quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc.
    • Screenshots bundled as PDF document - Not really... ;-)
    • Creating a screencast or video - Simply navigate through your website and record your desktop session. Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error.
    Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven, built on ASP.NET. As we have to put the result onto iPads we are going to choose the tool based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is to use wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence we opted for using HTTrack. HTTrack comes in different flavours: as a console application, but also as either a GUI (WinHTTrack on Windows) or a Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description of HTTrack, taken from the original website: "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online."
    And there is extensive documentation for all options and switches online. The general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains some detailed information about errors that appeared on the pages and the links within the pages that have been processed.

        Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
        Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
        Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily there were only images missing. Being directly on the web server we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on the process of browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interactions like search, pagination and partly for navigation. This is the main field of improving the offline experience. Of course, same as for standard web development, it is advised to test with various browsers, and strangely we discovered that the offline version looked pretty good on Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a view at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason; even nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it was very helpful to be familiar with an editor that allows search and replace over multiple files, such as sed - the stream editor for filtering and transforming text - on Linux, or my personal favourite, Notepad++, on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that was addressed to ASP.NET files instead of their generated HTML counterparts, like so:

        grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too. Just use the 'Replace in files' feature and you are settled - especially in combination with Regular Expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Besides IE, for reasons already mentioned above, the results were identical. And last but not least it was about to check the web site on tablets. Please continue to read the following articles:
    • Taking web sites offline for demonstration on Galaxy Tablet
    • Taking web sites offline for demonstration on iPad
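    For readers who want to try this themselves, a minimal console invocation added in editing (check the filter syntax against the HTTrack manual before relying on it):

        $ httrack "http://www.resortwork.co.uk/" -O "/tmp/resortwork" \
            "+*.resortwork.co.uk/*" -v
        # -O   output directory for the mirror
        # +... filter: stay within the site's own domain
        # -v   verbose progress on the console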

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java based web server on port 80. The options are:
    • Web proxy (Apache, nginx, etc.)
    • xinetd
    • iptables
    • setuid
    The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
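    For reference, the iptables variant usually looks like the following (a sketch added in editing; the assumption that the Java server listens on unprivileged port 8080 is mine):

        # external traffic arriving on port 80 -> local port 8080
        $ iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # connections originating on the host itself bypass PREROUTING:
        $ iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080

    Since the rewrite happens in the kernel's NAT table, there is no per-request process creation or userspace copy, which is exactly the overhead the question is trying to avoid.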

    Read the article

  • Migrating Windows 2008 R2 to Windows 2012 (migrate all FSMO too)

    - by Mauro
    I own 2 servers running Windows 2008 R2, both DCs. The first one is of course the primary DC (holding all FSMO roles). What I would like to do is demote the 2nd DC, remove it from the domain and replace Windows 2008 R2 with 2012. I will then rejoin this 2nd DC (with the new 2012 server) to the domain and promote it again (Server Management). Once it is a DC again, I would like to temporarily transfer all the FSMO roles to this server while I do the same operation on what is currently the primary DC. Is this a stupid solution? What I would like to have is a clean installation; I don't want to upgrade those systems in place. Suggestions? Ideas? Thanks, Mauro
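    For the role transfer itself, a pointer added in editing (run from an elevated PowerShell with the AD module available, and replace "DC2" with the target DC's name): Windows Server 2012 can move all five FSMO roles in one command:

        Move-ADDirectoryServerOperationMasterRole -Identity "DC2" `
            -OperationMasterRole SchemaMaster,DomainNamingMaster,PDCEmulator,RIDMaster,InfrastructureMaster

    The same transfer can be done the classic way with ntdsutil, but the cmdlet is less error-prone.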

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    • Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    • Additional Storage Options for Snap Clone (includes support for the Database feature CloneDB)
    • Improved Rapid Start Kits
    • Extensible Metering and Chargeback
    • Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog
    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:
    • Service Catalogs: Defining Standardized Database Service
    • High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]
    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:
    • Present a collection of standardized database service definitions
    • Define standardized pools of hardware and software for provisioning
    • Role based access to cater to different classes of users
    • Automated procedures to provision the predefined database definitions
    • Setup chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    • Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    • The standby databases can be single instance, RAC, or RAC One Node databases
    • Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    • The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    • All database versions 10g to 12c are supported (as certified with EM 12c)
    • All 3 protection modes can be used - maximum availability, performance, security
    • Log apply can be set to sync or async along with the required apply lag
    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (combinations supported in EM 12cR4):

        Primary | Standby [1 or more]
        --------+--------------------
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template.
    A sample service catalog could define 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.
    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:
    • Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    • Oracle OpenWorld Presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB
    The advantages of the new CloneDB integration with EM12c Snap Clone are:
    • Space and time savings
    • Ease of setup - no additional software is required other than the Oracle database binary
    • Works on all platforms
    • Reduces the dependence on storage administrators
    • Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    • Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    • Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.
    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.
    Steps to use the kit:
    • The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    • It can be run from this default location or from any server which has the emcli client installed
    • For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    • For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    • Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    • Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:
    • Extend chargeback to any target type managed in EM
    • Promote any metric in EM as a chargeback entity
    • Extend the list of charge items via metric or configuration extensions
    • Model abstract entities like no. of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:
    • Custom naming of DB Services - Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    • 'Create like' of Service Templates - Now creating variants of a service template is only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    • Profile viewer - View the details of a profile like datafiles, control files, snapshot ids, export/import files, etc. prior to its selection in the service template
    • Cleanup automation for failed and successful requests - A single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per request basis or for the entire pool, and as an extension you can also delete successful requests
    • Improved delete user workflow - Allows administrators to reassign cloud resources to another user or delete all of them
    • Support for multiple tablespaces for schema as a service - In addition to multiple schemas, users can also specify multiple tablespaces per request
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!
    References:
    • Cloud Management Page on OTN
    • Cloud Administration Guide [Documentation]
    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Black Screen after Resume from Sleep (Kubuntu)

    - by user20271
    I know there are a lot of other posts like this, but I have been looking for hours and I still haven't found any solution. I have recently installed Kubuntu Linux alongside my Windows 7; sleep on Win7 works fine and resumes like normal. When I am booted into Kubuntu and I put my laptop to sleep, it goes into sleep as normal. When I go to RESUME from the sleep, the screen stays solid black: it doesn't light up, no blinking cursor or anything. The Wi-Fi light is 'off' (orange) and I cannot turn it on. The Caps Lock and the Num Lock lights on the keyboard blink slowly. I hear something on the inside of the computer start to spin. I am not very experienced with Kubuntu/Linux, but I do know a bunch of computer terminology; I am still far from an expert though. I have about 300GB designated to my Win7 stuff, and another partition with about 100GB for my Kubuntu Linux. My computer's specs are as follows:
    • Windows 7 64-bit
    • The most recent version of Kubuntu (downloaded a few days ago, updated yesterday)
    • AMD Athlon dual-core processor
    • 4GB of RAM
    • HP G61 laptop
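    A first diagnostic step (an editing addition, not from the thread; assumes a Kubuntu of that era using pm-utils): reboot after a failed resume and look at what the suspend hooks logged.

        $ cat /var/log/pm-suspend.log      # pm-utils hook output from the last suspend
        $ dmesg | grep -i -e suspend -e resume
        $ sudo pm-suspend                  # suspend from a console to rule out KDE

    Slowly blinking Caps Lock/Num Lock LEDs usually indicate a kernel panic during resume, which most often points at the graphics driver; noting which driver is in use (free vs. proprietary) helps when reporting the bug.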

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names. Also due to numeric data taking significantly more space than using native datatypes. A small level could easily end up as 1Mb+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep with the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? - are any ideal for games use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?
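    As a quick way to sanity-check the last option mentioned above (an illustration added in editing; level.xml is a hypothetical file name): tag and attribute duplication is exactly what a general-purpose compressor handles well, so zlib-compressing the raw XML gives a cheap size baseline to compare any binary format against:

        import zlib

        with open("level.xml", "rb") as f:    # hypothetical level file
            raw = f.read()

        packed = zlib.compress(raw, 9)        # 9 = best compression
        print(len(raw), "->", len(packed), "bytes")

        # load path: decompress, then hand the text to the usual XML parser
        xml_text = zlib.decompress(packed)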

    Read the article

  • Gathering all data in single iteration vs using functions for readable code

    - by user828584
    Say I have an array of runners with which I need to find the tallest runner, the fastest runner, and the lightest runner. It seems like the most readable solution would be:

        runners = getRunners();
        tallestRunner = getTallestRunner(runners);
        fastestRunner = getFastestRunner(runners);
        lightestRunner = getLightestRunner(runners);

    ..where each function iterates over the runners and keeps track of the largest height, greatest speed, and lowest weight. Iterating over the array three times, however, doesn't seem like a very good idea. It would instead be better to do:

        int greatestHeight, greatestSpeed, leastWeight;
        Runner tallestRunner, fastestRunner, lightestRunner;
        for(runner in runners){
            if(runner.height > greatestHeight) {
                greatestHeight = runner.height;
                tallestRunner = runner;
            }
            if(runner.speed > ...
        }

    While this isn't too unreadable, it can get messy when there is more logic for each piece of information being extracted in the iteration. What's the middle ground here? How can I use only a single iteration while still keeping the code divided into logical units?
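    One middle ground (a sketch added in editing, in Python for brevity; the question itself is language-agnostic): keep each criterion as its own small unit, a name plus a key function, and let a single generic loop fold all of them at once:

        criteria = {
            "tallest":  lambda r: r.height,
            "fastest":  lambda r: r.speed,
            "lightest": lambda r: -r.weight,   # negate so the smallest weight wins
        }

        def find_bests(runners, criteria):
            # one pass over the data; per-criterion logic stays in its key function
            best = {}
            for runner in runners:
                for name, key in criteria.items():
                    if name not in best or key(runner) > key(best[name]):
                        best[name] = runner
            return best

        # usage:
        # bests = find_bests(runners, criteria)
        # bests["tallest"], bests["fastest"], bests["lightest"]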

    Read the article

  • Discovering path through unknown territory

    - by TravisG
    Let's say all the AI knows about its surroundings is a pixel-map that it has which clearly shows walkable terrain and obstacles. I want the AI to be able to traverse this terrain until it finds an exit point. There are some restrictions:
    • There is always a way to the exit in the entire map that the AI walks around in, but there may be dead ends.
    • The path to the exit is always pretty random, meaning that if you stand at a crossroads, nothing indicates which direction would be the right one to go.
    • It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there.
    • Initially, the AI starts out knowing only the starting area of the whole map. As it walks around, new points are added to the pixel-map corresponding to the AI's range of sight (think of it like the AI is clearing the fog of war).
    • The problem is in 2D space. All I have is the pixel map.
    • There are no paths in the pixel map which are "too narrow". The AI fits through everything.
    • It shouldn't be a brute force solution. E.g. it would be possible to simply find a path to each pixel in the pixel map that is yet undiscovered (with A*, for example), which would lead to the AI discovering new pixels. This could be repeated until the end is reached.
    The path doesn't have to be the shortest path (this is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kinds of approaches to solve this problem are there?
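    One common family of answers (a sketch added in editing, not from the thread) is frontier-based exploration: repeatedly path-find to the nearest known cell that borders unexplored territory, walk there, let the fog clear, and repeat until the exit becomes visible. A minimal grid version, where grid maps (x, y) to '?', '.' or '#':

        from collections import deque

        UNKNOWN, FLOOR, WALL = '?', '.', '#'

        def nearest_frontier_path(grid, start):
            # BFS over known-walkable cells to the closest cell that touches
            # an UNKNOWN cell; returns a path [start, ..., frontier], or
            # None once the whole known map has been explored
            queue = deque([start])
            came_from = {start: None}
            while queue:
                cell = queue.popleft()
                x, y = cell
                neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                if any(grid.get(n, UNKNOWN) == UNKNOWN for n in neighbours):
                    path = []
                    while cell is not None:        # walk the breadcrumbs back
                        path.append(cell)
                        cell = came_from[cell]
                    return path[::-1]
                for n in neighbours:
                    if grid.get(n) == FLOOR and n not in came_from:
                        came_from[n] = cell
                        queue.append(n)
            return None

    The agent walks the returned path, newly seen pixels get added to grid, and the loop repeats until the exit is spotted or the function returns None. Because BFS finds the nearest frontier, the movement also matches the "leave the room through the visible door" intuition from the question.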

    Read the article

  • IIS 7.5 / Windows 7: error 500.19, error code 0x800700b7

    - by nikhiljoshi
    Hi friends, I have been trying to resolve this issue; can you guys please help me with it? I am using Windows 7 and VS2008 + IIS 7.5, and my project is stuck. Please reply. Here is what the error says:

    Error Summary
    HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid.

    Detailed Error Information
    • Module: IIS Web Core
    • Notification: BeginRequest
    • Handler: Not yet determined
    • Error Code: 0x800700b7
    • Config Error: There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined
    • Config File: \\?\C:\inetpub\wwwroot\test23\web.config
    • Requested URL: http://localhost:80/test23
    • Physical Path: C:\inetpub\wwwroot\test23
    • Logon Method: Not yet determined
    • Logon User: Not yet determined
    • Config Source: 15: 16: 17:

    I have tried the solution given on this Microsoft support page: http://support.microsoft.com/kb/942055
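    For background (an editing note summarizing the usual cause; verify against your own web.config): on IIS 7.x with an application pool targeting .NET 3.5 or later, the system.web.extensions section group is already declared machine-wide, so a second declaration carried over in the site's web.config produces exactly this duplicate-section error. The usual fix is deleting that duplicated declaration from the application's web.config - the block shaped roughly like:

        <configSections>
          <sectionGroup name="system.web.extensions" ...>
            <sectionGroup name="scripting" ...>
              <!-- ... scriptResourceHandler and related sections ... -->
            </sectionGroup>
          </sectionGroup>
        </configSections>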

    Read the article

  • Does anyone using GoDaddy shared Windows webhosting have multiple websites on it and face this problem?

    - by Amr ElGarhy
    I have multiple websites on the same shared hosting on a GoDaddy server; it's the Deluxe Hosting - Windows plan. I asked a question about this before: http://serverfault.com/questions/13906/how-to-fix-subfolders-iis7-functionality But I feel that no one is facing this problem except me, so I want to know what I am doing wrong; if someone has had the same problem, please tell me. All my websites are in subfolders under the root folder. The problem is that all links show up like this: www.example.com/example/..., www.anotherwebsite.com/anotherwebsite/..., such as this: http://amrelgarhy.com/ Meaning the folder name shows up in the URL. I did all I can and discussed this with GoDaddy a lot, but they always tell me it's an IIS7 problem. Did you face this problem before, or do you know a solution for it?

    Read the article

  • Random timeouts in IIS7

    - by Cor-Paul
    Hi, I have a weird problem which I think is caused by my IIS7 installation on Vista 64-bit. I have a bunch of AJAX calls and JS dynamic file loads (~30) in a local application, and I get random timeouts (or so it seems) in my browser. In Chrome it looks like the page just stops loading (no HDD activity); in Firefox/Firebug I can see that some of the files are being loaded but they never actually finish. When I reload the page the same occurs, but for (random) other files that must be loaded. When I try to load one of those JS files concurrently (so during the timeout in FF) in another browser, the file loads there. So I am pretty sure the request can be handled by IIS. I am thinking about a limit on simultaneous requests from the same browser which is not working correctly, but I am pretty clueless on how to solve this. Does anyone recognize this problem and know a solution? Thanks!

    Read the article

  • How to resolve SSPI context error without changing Service Account from MSSQL

    - by kockiren
    There is an issue when connecting from new Windows 8.1 clients to SQL Server 2008 running on Windows Server 2008 R2. The SQL service runs under the account domain\mssqlservice. On a machine that works fine, I get this output from setspn -l domain\mssqlservice:

        C:\>setspn -l domain\mssqlservice
        Registered service principal names (SPNs) for CN=MSSQLService,CN=Users,DC=domain,DC=local,DC=tld:
            MSSQLSvc/mssql.domain.local.tld:1433
            MSSQLSvc/mssql.domain.local.tld
            MSSQLSERVER/mssql.domain.local.tld:1433

    On a Windows 8.1 machine that doesn't work, I get this output (translated from German):

        C:\>setspn -l domain\msssqlservice
        FindDomainForAccount: Call to DsGetDcNameWithAccountW failed with return value 0x0000054B. The account kockiren was not found.

    In this post I found a solution, but I can't change the service account that runs the SQL service; some applications need this service delegation. But how can I get it to work on my Windows 8.1 clients?
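    A general pointer (added in editing, not from the original post): if an SPN really is missing or mistyped, it can be registered without touching the service account, from an elevated prompt with domain admin rights:

        C:\> setspn -S MSSQLSvc/mssql.domain.local.tld:1433 domain\mssqlservice

    The -S switch checks for duplicate SPNs before adding (older setspn versions only offer -A, which adds unconditionally). Note also that the failing output above queries domain\msssqlservice with three s's, so the "account not found" message may simply be a typo in the command rather than a server-side problem.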

    Read the article

  • asp.net mvc vs angular.js model binding

    - by aw04
    So I've noticed a trend lately of .NET web developers using angular.js on the client side of applications, and I've become more curious as I play around with Angular and compare it to how I would do things in asp.net mvc. I'll give a quick example of what really got me thinking.

    I recently came across a situation at work (I work in a .NET environment) where I needed to create a table bound to a collection of objects, with the ability to add and remove rows/items from the collection. I had an add button that created a new object and appended a row to the end of the table, and a remove button in each row to remove a particular object/row. Using asp.net mvc, I first found myself making an ajax call to the server for each operation, updating the server side model, and refreshing part of the page to show the result in the table. This worked, but I didn't really like the idea of calling the server to update the model each time, so I tried to come up with a solution to do this on the client side. It turned out to be quite a task, as I had to generate the html on add, with validation and all, and the correct indexing for the model binding to work. It got worse on remove, as I ended up with a crazy string replace function to recreate the indexes on each item to satisfy the binding requirements (if an item other than the last is removed, the indexes are no longer correct).

    Now out of curiosity, I tried to recreate this at home in Angular (which I had no experience with) and it took me all of about 10 minutes with simple functions to add and remove items from the client side model.

    This is just one example, but it seems to me that I'm able to achieve the same results with far fewer calls to the server in Angular because it binds to a client side model. So my question is: is this a distinct advantage of using a javascript mvc framework, or am I somehow under-utilizing the power of asp.net mvc? Am I right in thinking that these operations should be done on the client and have no business requiring calls to the server?

    Read the article

  • Make ZoneAlarm stop pausing my C programs when I run them

    - by rMaero
    I'm using Dev-C++ to develop some console apps to study. When my program runs system("PAUSE"), ZA stops it and asks me to allow or deny it. I check the "always" box, but it seems that every compile generates a new exe file, so every time I run it ZA pops up again. Of course the simplest solutions are to disable ZA or just deal with it :-P but I'm not eager to do either. Any suggestions? Thanks in advance!

    Read the article

  • Resize primary partition

    - by telebog
    I have a hdd with the following partition table:
    • 12GB Primary Partition (NTFS)
    • 140GB Extended Partition (NTFS)
    I want to install Windows 7 and I need more space for the primary partition. The problem is that when I resize the partitions I obtain:
    • 12GB Primary Partition (NTFS)
    • 110GB Extended Partition (NTFS)
    • 30GB Free Space
    So I can't allocate the free space to the primary partition, because the free space is at the end of the disk. Is there a solution to extend the primary partition to:
    • 42GB Primary Partition (NTFS)
    • 110GB Extended Partition (NTFS)
    without repartitioning the entire disk? I used Partition Magic, gparted-live-0.4.6-4 and others with no success. With Disk Management in Vista I managed to extend the primary partition, but it made my partitions dynamic.

    Read the article

  • OWB/ODI Users: Last Chance to Submit and Vote On Sessions for OpenWorld 2010

    - by antonio romero
    Now is the last chance for OWB and ODI users to propose new ETL/DW/DI sessions for OpenWorld! Oracle OpenWorld 2010 "Suggest a Session" lets members of the Oracle Mix community submit and vote on papers/talks for OpenWorld. The most popular session proposals will be included in the conference program. One promising OWB-related topic has already been submitted: "Case Study: Real-Time Data Warehousing and Fraud Detection with Oracle 11gR2". Dr. Holger Friedrich and consultants from sumIT AG in Switzerland built a real-time data warehouse and accompanying BI system for real-time online fraud detection with very limited resources and a short schedule. His presentation will cover:
    • How sumIT AG efficiently loads complex data feeds in real time in Oracle 11gR2 using, among others, Advanced Queues and XML DB
    • How they lowered costs and sped up development by leveraging the DB's development features, including Oracle Warehouse Builder
    • How they delivered a production-ready solution in a few short months using only three part-time developers
    Come vote for this proposal on Oracle Mix: https://mix.oracle.com/oow10/proposals/10566-case-study-real-time-data-warehousing-and-fraud-detection-with-oracle-11gr2
    I have already invited members of the OWB/ODI LinkedIn group (with over 1400 members) to come vote on topics like this one and propose their own. If enough of us vote on a few topics, we are sure to get some on the agenda! And if you have your own topics, propose them using the Suggest-a-Session instructions here: http://wiki.oracle.com/page/Oracle+OpenWorld+2010+Suggest-a-Session
    If you propose a topic, don't forget to come to LinkedIn and promote it! I have already sent the members of the LinkedIn group an email announcement about this, and I will send another in a week, with links to all topics submitted. Thanks, all!

    Read the article
