Search Results

Search found 23021 results on 921 pages for 'process monitoring'.


  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said: "You won't need PCI certification, because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app." Even though I very much trust and respect the knowledge of our web developers, what they are saying raises some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML direct to communicate with our third-party merchant for processing. My two questions are as follows: Based on everything you've read, is "XML direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true that our site does not need PCI certification? As I understand it, using the XML direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).
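    The distinction being drawn can be sketched in code. The snippet below is purely hypothetical (the gateway URL, fields and response format are invented, not any specific processor's API): it shows the server-to-server, "XML direct"-style flow in which the card number passes through your own application, which is the situation that clearly brings your server into PCI DSS scope. In the "direct post" flow the developers describe, the payment form in the shopper's browser submits straight to the processor's URL, so code like this never runs on your server - which is the basis of their claim.

    # Hypothetical server-to-server charge request ("XML direct"-style flow).
    # Endpoint and field names are invented for illustration only.
    import urllib.parse
    import urllib.request

    def charge_card(card_number, expiry, amount_cents):
        payload = urllib.parse.urlencode({
            "card_number": card_number,   # sensitive card data handled by YOUR server
            "expiry": expiry,
            "amount": amount_cents,
        }).encode()
        req = urllib.request.Request("https://gateway.example.com/charge", data=payload)
        with urllib.request.urlopen(req) as resp:
            return resp.read()   # processor's approval/decline response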

    Read the article

  • apache2.2 response problem

    - by ffffff
    We serve some heavy files (about 100k each), and sometimes the response time is very slow (100 s). Why? I'm a poor server administrator, so could you help me? Here is the relevant information: one httpd process uses about 10 MB, and the server has 4 GB of RAM.
    <IfModule mpm_prefork_module>
        StartServers 300
        MinSpareServers 10
        MaxSpareServers 300
        ServerLimit 1000
        MaxClients 1000
        MaxRequestsPerChild 9999
    </IfModule>
    What are the best settings? Is MaxClients too high?
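    A rough back-of-the-envelope check (a sketch based on the figures above, not a tuning recommendation): with prefork, every simultaneous client gets its own httpd process, so MaxClients times the per-process footprint has to fit in RAM after the OS and any other services take their share. At ~10 MB per process and 4 GB total, MaxClients 1000 can commit around 10 GB and push the box into swap, which is a classic cause of 100-second responses.

    # Rough MaxClients sizing for Apache prefork, using the figures quoted
    # above. The reserved amount is an assumption, not a measurement.
    total_ram_mb = 4096        # 4 GB of RAM
    reserved_mb = 1024         # headroom for the OS and everything else (assumption)
    per_process_mb = 10        # approximate size of one httpd process

    max_clients = (total_ram_mb - reserved_mb) // per_process_mb
    print("A safer MaxClients is roughly", max_clients)   # ~307 with these numbers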

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason not to want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample-data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof of concept.

    Getting started with the Database
    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to SQL*Plus, we can also check out our new IRR_DATA table. At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking
    Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1. Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)

    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. The database then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally it combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. (A plain-Python sketch of this split-apply-combine idea follows at the end of this excerpt.)

    From Walking to Running
    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database, which we do via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results
    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (you may need to right-click the images and view them full-screen to see everything). From the above, you can notice a few things (numbers 1 through 5 below correspond with the highlighted numbers on the images):
    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database
    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.

    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.
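    As promised above, here is a plain-Python sketch of the split-apply-combine idea. This is not ORE syntax (the original post showed that in screenshots), and the rate-of-return function is a stand-in for SimpleMWRR(); in the real solution the grouping and the per-group calls happen inside the database, in parallel embedded R engines.

    # Split-apply-combine in miniature: split rows by account, apply a
    # per-account calculation, combine the results into one structure.
    from collections import defaultdict

    def stand_in_mwrr(rows):
        # Placeholder return calculation; NOT the actual MWRR logic.
        start = rows[0]["value"]
        net_flows = sum(r["flow"] for r in rows)
        return (rows[-1]["value"] - start - net_flows) / start

    irr_data = [  # toy stand-in for the IRR_DATA table
        {"account": "A1", "value": 100.0, "flow": 0.0},
        {"account": "A1", "value": 112.0, "flow": 5.0},
        {"account": "A2", "value": 200.0, "flow": 0.0},
        {"account": "A2", "value": 190.0, "flow": -20.0},
    ]

    groups = defaultdict(list)            # split by ACCOUNT
    for row in irr_data:
        groups[row["account"]].append(row)

    results = {acct: stand_in_mwrr(rows)  # apply, then combine
               for acct, rows in groups.items()}
    print(results)                        # {'A1': 0.07, 'A2': 0.05}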

    Read the article

  • bluetooth headset can connect, but not visible in pulse audio

    - by Kim Marivoet
    I have a Plantronics Bluetooth headset, and until yesterday I could use it without any problem. However, today it suddenly stopped working (maybe related to the last software update I did). I can still connect/disconnect my headset, but it doesn't show up in PulseAudio anymore. I read through various posts that describe roughly the same problem, but none of the suggested solutions worked. I get the following errors in the syslog:
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/HFPAG
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSource
    Oct 13 16:49:57 desktop bluetoothd[1040]: Endpoint registered: sender=:1.34 path=/MediaEndpoint/A2DPSink
    Oct 13 16:50:09 desktop kernel: [ 17.340943] input: 48:C1:AC:08:FE:8F as /devices/virtual/input/input14
    Oct 13 16:50:09 desktop bluetoothd[1040]: /org/bluez/1040/hci0/dev_48_C1_AC_08_FE_8F/fd0: fd(36) ready
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Successfully made thread 2213 of process 1892 (n/a) owned by '1000' RT at priority 5.
    Oct 13 16:50:09 desktop rtkit-daemon[1894]: Supervising 5 threads of 1 processes of 1 users.
    Oct 13 16:50:10 desktop bluetoothd[1040]: Badly formated or unrecognized command: AT+XEVENT=USER-AGENT,COM.PLANTRONICS,PLT_VOYAGERPRO,0109,27.90,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Oct 13 16:50:10 desktop bluetoothd[1040]: Audio connection got disconnected
    Any help would be much appreciated. I'm using Ubuntu 12.04. Thanks, Kim

    Read the article

  • Networking not working in Windows 7 - "The account specified for this service is different from the account specified for other services"

    - by tog22
    I have a homebuilt computer with a GA-Z68MA-D2H-B3 motherboard with a Realtek RTL8111E LAN chip. Ethernet was working fine in Windows 7 until I reinstalled the OS, but now it has stopped, while still working in other OSes. I've tried reinstalling drivers from both the Gigabyte and Realtek sites to no avail; I've also plugged in and installed an Asus USB-N13 wifi dongle and, oddly, that doesn't work either. How can I diagnose and fix this issue? 'Network and Sharing Center' says "The account specified for this service is different from the account specified for other services running in the same process" under the heading 'Unknown'. Following the advice at http://www.sevenforums.com/network-sharing/130159-dependency-service-group-failed-start.html#post1122627 I've gone to 'Control Panel > Admin Tools > Services' and ensured the services listed at that URL are started (one - IIRC 'COM+ Event System' - refused to start as user 'Local Service', so I've had to set it to log on with the 'Local System account'... I suspect this may be part of the problem).

    Read the article

  • If two separate PATH directories contain a same-named executable, how does Windows choose?

    - by Coldblackice
    I'm in the process of upgrading PEAR (PHP) on my system. The upgrade script is encouraging me to add "..\PHP\PEAR" to my PATH so that I can use "pear.bat". However, I already am able to use pear.bat. Looking in my PATH, I see that I don't have any PEAR directories, only my PHP directory. Opening my PHP directory, I see that there's a "pear.bat" in the base. But there's also a pear.bat in the PEAR subfolder of PHP. I'm wondering if I borked a PEAR install. I digress. So if I leave ..\PHP in my path, but also add ..\PHP\PEAR -- both of which have a "pear.bat" in them -- which one will Windows "choose"? How does Windows decide?
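    For what it's worth, the Windows command-line search order is: the current directory first, then each directory in PATH from left to right, and within a directory cmd.exe tries the extensions in PATHEXT in order. So whichever of ..\PHP and ..\PHP\PEAR appears earlier in PATH supplies the pear.bat that actually runs ("where pear" at a command prompt lists the matches in that order). Below is a small sketch that approximates the same lookup, handy for double-checking which copy wins.

    # Approximate the Windows command-line search order to see which "pear"
    # would be picked: current directory first, then PATH left to right,
    # trying PATHEXT extensions in each directory.
    import os

    def which_windows(name):
        exts = os.environ.get("PATHEXT", ".COM;.EXE;.BAT;.CMD").split(";")
        search_dirs = [os.getcwd()] + os.environ.get("PATH", "").split(os.pathsep)
        for directory in search_dirs:
            for ext in [""] + exts:
                candidate = os.path.join(directory, name + ext)
                if os.path.isfile(candidate):
                    return candidate   # first hit in search order wins
        return None

    print(which_windows("pear"))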

    Read the article

  • xDebug on Zend Server CE under Windows XP

    - by Hippyjim
    I have Zend Server installed on my Windows XP development machine, installed when I was naive and didn't know that Eclipse was going to suck so badly for PHP development. I've made the upgrade to NetBeans, but for debugging it only supports xDebug. To be fair, I've never used "proper" debuggers before, but other folks have raved about them, so I thought I'd give it a try. I followed some directions on the Zend forum about how to install xDebug on Zend Server, disabling Zend Debugger in the process. The xDebug "custom installation instructions" wizard tells me that my PHP was compiled with an unsupported compiler (MS VC8), and won't let me download anything. I tried a couple of the other xDebug binaries, but they just refused to load. So I'm left without a debugger option. Does anyone know how I can change the compiler of the PHP version I have installed so I can use a debugger in NetBeans? Or how else I can get xDebug to install on Zend Server?

    Read the article

  • SOA Community Newsletter November 2012

    - by JuergenKress
    Dear SOA partner community member, Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM with your customers, you can find many use-case ideas in the paper BPM 11g Patterns and industry-specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available; they are an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentation at the OTN website. The Oracle SOA proactive support team is running a series of introductory blog posts about SOA and JMS. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge again this month. Please feel free to send us the link to your blog post via twitter @soacommunity:
    - Undeploy multiple SOA composites with WLST or ANT, by Danilo Schmiedel
    - Fault Handling Slides and Q&A, by Vennester
    - Installing Oracle Event Processing 11g, by Antoney Reynolds
    - Expanding the Oracle Enterprise Repository with functional documentation, by Marc Kuijpers
    - Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile, by Michelle Kimihira
    - A brief note for customers running SOA Suite on AIX platforms, by Christian
    - ACM - Adaptive Case Management, by Peter Paul
    - BPM 11g - Dynamic Task Assignment with Multi-level Organization Units, by Mark Foster
    - Oracle Real User Experience Insight: Oracle's Approach to User Experience
    Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • How to manage iowait over cifs?

    - by Silvia
    For backup purposes we have a CIFS file server running that contains encrypted containers for backing up the more sensitive data. The container is mounted with cryptsetup and loop as a local filesystem, and rsync is used for the backups. Because the CIFS server is not the fastest machine ever built, running the rsync process results in high iowait on the servers running the backup, which in turn drives Nagios into an email frenzy. The question is: how do we reduce the iowait on the server? Configuring Nagios not to report it seems more like a workaround than a solution. Stretching the backups over different time intervals has already been done with little effect, and spending money is also not an option because, apparently, we are talking about a "non-critical system".

    Read the article

  • CPanel - How to stop Apache from running as root user?

    - by ambu
    <?php echo `whoami`; ?> So this is returning 'root' and I don't know how to prevent it. I'm using WebHost Manager / cPanel, which is supposed to create multiple users/vhosts and have Apache spawn its processes as that user/group. This isn't happening. If I log in to WHM and open the PHP and SuExec Configuration section, my settings are: Default PHP Version (.php files) 5, PHP 5 Handler cgi, PHP 4 Handler none, Apache suEXEC on. What's wrong? How can I get Apache to run as the correct user rather than root?

    Read the article

  • Authentication system brainstorm

    - by gansbrest
    Hi. We have multiple small websites (microsites) and one main high-traffic one with a big user base. Right now the requirement is to build an authentication system that allows users to log in with the same identity across the network. All websites run on different domains, are powered by the Drupal 6 CMS, and have separate databases (so sharing tables with a prefix is not an option, and it creates a huge mess in the db). Here is the set of core requirements I came up with:
    1. Users should be able to log in with the same credentials on all sites within the network
    2. User data is shared between the main site (storage) and all microsites within the network
    3. Data is synchronized across the network when a user changes it (updates an email address or password, for example)
    4. The login/registration process should be seamless and consistent
    5. Users can register on any of the sites across the network and use that identity to log in later on
    In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what Stack Exchange does, but I'm not sure whether they have a central user base or not. I was thinking about a custom solution with two parts (modules): one will live on the main site, storing user data and responding to requests from clients, and the second part (module) will be placed on each microsite and will send requests to the master - some kind of client-server setup. One of the complications I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work has already been done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database.
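    To make the client-server idea concrete, here is a minimal sketch of what the microsite-side module could do: forward a login attempt to the main site for verification instead of storing credentials locally, signing the request with a per-site shared secret so the master knows which site is asking. The endpoint URL, field names and secret are hypothetical (not Drupal APIs), and in practice the call must only ever go over HTTPS.

    # Hypothetical microsite -> master credential check (sketch only).
    import hashlib
    import hmac
    import json
    import urllib.request

    MASTER_AUTH_URL = "https://main.example.com/auth/verify"    # hypothetical endpoint
    SHARED_SECRET = b"secret-issued-to-this-microsite"          # hypothetical secret

    def verify_login(email, password):
        body = json.dumps({"email": email, "password": password}).encode()
        signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        req = urllib.request.Request(
            MASTER_AUTH_URL,
            data=body,
            headers={"Content-Type": "application/json", "X-Site-Signature": signature},
        )
        with urllib.request.urlopen(req) as resp:
            # Expected (hypothetical) reply: {"ok": true, "uid": 123, "profile": {...}}
            return json.loads(resp.read())

    The same pattern can run in the other direction for requirement #3: when a user changes an email address or password on any site, the master pushes a signed change notification to each microsite so local copies stay in sync.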

    Read the article

  • Whois status "pending delete" with expiration date in November 2011???

    - by Sylver
    A friend of mine is in the process of being scammed by a domain registrar and I am trying to sort out the mess. However, I could use a hand understanding some of the details. He paid for 2 years of domain name registration on 6 November 2009. The whois record reads:
    Domain ID:XXXXXXXXXX
    Domain Name:XXXXXXXXX.ORG
    Created On:06-Nov-2009 09:23:12 UTC
    Last Updated On:17-Dec-2010 00:15:10 UTC
    Expiration Date:06-Nov-2011 09:23:12 UTC
    Sponsoring Registrar:OnlineNIC Inc. (R64-LROR)
    Status:CLIENT TRANSFER PROHIBITED
    Status:HOLD
    Status:PENDING DELETE SCHEDULED FOR RELEASE
    Registrant ID:ONLC-XXXXXXX-X
    Registrant Name:My friend's name
    ...
    Registrant Email:Old email
    The registrar charged a renewal fee a week ago and is now asking an extra $150 to "reclaim" the domain name, even though the domain name is apparently still in my friend's name and it looks like there are still another 10 months before the expiry date. The expiration date on the whois record looks right (Nov 2011), so I don't understand why the domain status says "PENDING DELETE SCHEDULED FOR RELEASE". Can someone explain to me what the deal is and what I need to do to get the domain name transferred to a more honest registrar? I already have a registrar for my own domain names - I've been using them for 10 years without problems - so I know where to transfer the domain names to; I just don't know how to proceed.

    Read the article

  • NRF Big Show 2011 -- Part 1

    - by David Dorf
    When Apple decided to open retail stores, they came to 360Commerce (now part of Oracle Retail) to help with the secret project. Similarly, when Disney Stores decided to reinvent itself, they also came to us for their POS system. In both cases visiting a store is an experience where sales take a backseat to entertainment, exploration, and engagement. This quote from a recent Stores Magazine article says it all: "We compete based on an experience, emotion and immersion like Disney," says Neal Lassila, vice president of global information technology for Disney. "That's opposed to [competing] on price and hawking a doll for $19.99. There is no sales pressure technique." Instead, it's about delivering "a great time." While you're attending the NRF conference in New York next week, you'll definitely want to stop by the new 20,000 square-foot Disney store in Times Square. If you're not attending, you can always check out the videos to get a feel for the stores' vibe. This year we've invited Disney Stores to open a pop-up store within the Oracle Retail booth. There will be lots of items on sale that fit in your suitcase, and there's no better way to demonstrate our POS, including the mobile POS running on an iPod Touch. You should also plan to attend Tuesday morning's super-session The Magic of the Disney Store: An Immersive Retail Experience with Steve Finney. In the case of Apple and Disney, less POS is actually a good thing. In both cases it was important to make the checkout process fast and easy so as not to detract from the overall experience. There will be ample opportunities to see this play out in New York next week, so I hope you take advantage.

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers.    Implementation team members need to have the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions can be made during set-up that best align with your business.  To that end product level implementation training is recommended for Oracle Global HR Cloud deployments. Training For Implementation Team Members and Consultants Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup. Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data. Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements. Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.  Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers or payroll administrators. Learn how to process payroll to ensure accurate setup results.  Learn More: See all Fusion HCM Training Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • Mailbox move issue from Exchange 2003 to Exchange 2010

    - by Ryan Roussel
    Today while moving mailboxes between Exchange 2003 and Exchange 2010, I hit an issue with a couple of mailboxes. These mailboxes all popped access-denied errors, or more exactly: Insufficient Access Rights to perform the operation. The cause was similar to the mail flow issue in that inheritable permissions were not turned on for the user object in Active Directory. This also presented its own unique problem: since the initial move request failed because of permissions, it had to be cleared before a new move request could be created. On top of that, the request did not show up in the EMC. I used the following process to clear the request, assign permissions, then create a new request:
    1. First you need to know the ExchangeGUID of the mailbox for the remove-moverequest command. To quickly get the GUID for a mailbox simply run:
    2. Next we need to clear out the move request using PowerShell by running: [PS] c:\>Remove-moverequest -moverequestqueue "mailbox database 1030639620" -mailboxguid 8525686f-d4d3-42b7-92f1-46d77ea841a3
    3. Then we need to re-establish inheritable permissions. This can be done by using AD Users and Computers: switch to View Advanced Features, then under the Security tab of the object, click Advanced and check "allow inheritable permissions of parent to propagate to this object".
    4. Once the inheritable permissions are restored, we need to create a new move request. NOTE: The EMC can also be used to initiate the move request once the permissions are corrected. [PS] c:\>New-moverequest –identity jyoung -baditemlimit 100 -targetdatabase "mailbox database 1030639620"
    And that's it. The mailbox should move over smoothly with no access-denied error.

    Read the article

  • PXE Boot PCLinuxOS ISO

    - by DBNotCooper
    I'm in the process of trying to convert some computers at my local school into diskless browser stations. We've identified PCLinuxOS as the OS we'd like to use due to its easy interface for creating custom ISO images (we need WINE and some custom apps installed, as well as Firefox). I've been having problems figuring out how to get an ISO to boot via PXE. On our network I only have access to TFTP and HTTP, so I cannot use NFS. The machines all have enough memory (4 gigs) that they could use a RAM drive to hold the ISO image, if that helps. Currently I've been looking at gPXE with GRUB/MEMDISK, but I don't know if that's the right solution, or even where a good resource is for setting it up. Searching the web has proved fruitless, as most of the information is either NFS-specific or out of date. The other students and I would appreciate any help! :)

    Read the article

  • PowerShell & SQL Compare

    - by Grant Fritchey
    Just a quick blog post to share a couple of scripts for using PowerShell to call SQL Compare. This is an example from my session at SQL in the City on setting up a sandbox development process. This just runs a compare between a set of scripts and a database and deploys it.
    set-Location "c:\Program Files (x86)\Red Gate\SQL Compare 10\";
    ./sqlcompare /s2:DOJO /db2:MovieManagement_Sandbox /sourcecontrol1 /vu1:grant /vp1:12345 /r1:HEAD /sfx:scripts.xml /sync /mfx:migrations.xml /verbose;
    I would not recommend using the /verbose output for real automation, but I'm showing off how the tool works. This particular script does a compare straight from source control to a database on my server. You can use variables where I've hard coded. That's it. Works great. Just wanted to share it out there. I have others that I'll track down and put up here.

    Read the article

  • Reviewing firewall rules

    - by chmeee
    I need to review the firewall rules of a Check Point firewall for a customer (200+ rules). I have used FWDoc in the past to extract the rules and convert them to other formats, but there were some errors with exclusions. I then analyze the rules manually to produce an improved version (usually in OOo Calc) with comments. I know there are several visualization techniques, but they all come down to analyzing the traffic, and I want static analysis. So I was wondering: what process do you follow to analyze firewall rules? What tools do you use (not only for Check Point)?

    Read the article

  • Installing PHP APC in Fedora - Unable to initialize module ?

    - by sri
    I have been trying to install APC on my Fedora Apache server to show a progress bar while uploading files, but I am getting the following PHP warning when starting XAMPP:
    Starting XAMPP for Linux 1.7.1...
    PHP Warning: PHP Startup: apc: Unable to initialize module
    Module compiled with module API=20090626, debug=0, thread-safety=0
    PHP compiled with module API=20060613, debug=0, thread-safety=0
    These options need to match in Unknown on line 0
    XAMPP: Starting Apache with SSL (and PHP5)...
    XAMPP: Starting MySQL...
    XAMPP: Another FTP daemon is already running.
    XAMPP for Linux started.
    My server details: OS: Fedora-12, XAMPP version: 1.7.1, PHP version: 5.2.9, APC version: 3.1.9. I have tried the process as mentioned here:
    1) http://2bits.com/articles/installing-php-apc-gnulinux-centos-5.html
    2) http://stevejenkins.com/blog/2011/08/how-to-install-apc-alternative-php-cache-on-centos-5-6/

    Read the article

  • Easy Transfer from a dead computer

    - by Nathan DeWitt
    I had a computer that electrocuted me and the company sent me a new one. The hard drive from the old computer works fine and is in my new computer. I would like to transfer my files from the old drive to the new one, preferably using Easy Transfer (old & new computers were Win7). When I go through the Easy Transfer wizard, it assumes my old computer is running and that I can run a process to backup all my data to a single file. However, in my case I have the system drive in my new computer and want to pull the data off it. I would like to avoid rebooting the old computer, to avoid damage to myself or my data. I would like to avoid booting into the old system drive, as my new hardware is significantly different and I imagine I'll run into some missing hardware issues. What's the easiest way to get my data off this drive?

    Read the article

  • Versioning and Continuous Integration with project settings files

    - by Michael Stephenson
    I came across something which was a bit of a pain in the bottom the other week. Our scenario was that we had implemented a helper-style assembly which had some custom configuration implemented through the project settings. I'm sure most of you are familiar with this: you end up with a settings file which is viewable through the C# project file, and you can configure some basic settings. The settings are embedded in the assembly during compilation as part of a DefaultValue attribute. You have the ability to override the settings by adding information to your app.config, and if the app.config doesn't override a setting then the embedded default is used. All normal C# stuff so far… Where our pain started was when we implemented Continuous Integration and we wanted to version all of this from our build. What I was finding was that the assembly was versioned fine, but the embedded default value was maintaining the non-CI build version number. I ended up getting this to work by using a build task to change the version numbers in the following files: App.config, Settings.settings, and Settings.designer.cs (a rough sketch of such a step follows below). I think I probably could have got away with just the settings.designer.cs, but I wanted to keep them all consistent in case we had to look at the code on the build server for some reason. I think the reason this was painful was that the settings.designer.cs is only updated through Visual Studio, which writes out the code to this file, including the DefaultValue attribute, when the project is saved rather than as part of the compilation process. The compile just compiles the already existing C# file. As I said, we got it working, and it was a bit of a pain. If anyone has a better solution for this I'd love to hear it.
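    As a minimal illustration of that kind of pre-build version stamping (the file paths, version pattern and invocation below are assumptions about a typical C# project layout, not the MSBuild task the author actually used):

    # Substitute the CI build's version number into the files that carry the
    # embedded defaults before the compile runs.
    import re
    import sys

    FILES = [
        "App.config",
        "Properties/Settings.settings",
        "Properties/Settings.Designer.cs",
    ]

    def stamp_version(new_version):
        pattern = re.compile(r"\b\d+\.\d+\.\d+\.\d+\b")   # e.g. 1.0.0.0
        for path in FILES:
            with open(path, encoding="utf-8") as f:
                text = f.read()
            with open(path, "w", encoding="utf-8") as f:
                f.write(pattern.sub(new_version, text))

    if __name__ == "__main__":
        stamp_version(sys.argv[1])   # e.g. python stamp_version.py 2.3.0.417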

    Read the article

  • Sharepoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged in the Event Log every 5 minutes: The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer. In addition, OWSTimer.exe periodically gets into a state where it's consuming almost all the CPU, and only killing the process or restarting the Sharepoint services fixes it (although I'm not sure if this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:
    - iisreset (no effect)
    - Added the Sharepoint and Sharepoint Search service accounts to the Log on as a batch job and Log on as a service policies in the Group Policies for the domain. I went into the Local Computer Policy on the Sharepoint server and verified that those policies had actually been applied.
    - Verified that the Sharepoint and Sharepoint Search service accounts are both in the WSS_WPG group.
    - Verified in dcomcnfg that the WSS_WPG group (and indeed the Sharepoint and Sharepoint Search service accounts) has local activation rights for SPSearch.
    Any more suggestions would be valued. Thanks

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
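    For the statistical step mentioned above (turning a single-point, 50%-confidence estimate plus a range into estimates at other confidence levels), here is one minimal way to do it, assuming a normal distribution and assuming the low/high range is meant to cover about 90% of outcomes. This is a generic normal-approximation sketch, not necessarily the exact calculation from Software Estimation or the historical-accuracy adjustment described above.

    # Derive an estimate at another confidence level from a ranged task estimate,
    # under a normal-distribution assumption (the 90% range coverage is also an
    # assumption).
    from statistics import NormalDist

    def estimate_at(p, best_guess_hours, low_hours, high_hours, range_coverage=0.90):
        # Width, in standard deviations, of an interval covering `range_coverage`
        # of a normal distribution (about 3.29 for 90%).
        z_width = 2 * NormalDist().inv_cdf(0.5 + range_coverage / 2)
        sigma = (high_hours - low_hours) / z_width
        return NormalDist(mu=best_guess_hours, sigma=sigma).inv_cdf(p)

    # A task estimated at 3 hours (50% confidence) with a 2-6 hour range:
    print(round(estimate_at(0.75, 3, 2, 6), 1))   # hours needed for 75% confidence, ~3.8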

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com and http://www.codinghorror.com. (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it:
    - My IP address was quickly banned from Google for using it
    - I got lots of 500 and 503 errors and "waiting 5 minutes…"
    - Ultimately, I can recover the text content faster by hand
    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
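    A small helper for the image side of the problem, once the post HTML has been saved from the caches: collect every <img> URL the recovered pages reference, so there is at least a checklist of images to hunt for in whatever caches and archives turn out to hold them. Pure standard library; the glob pattern and base URL are assumptions about where the recovered pages were saved.

    # Build a list of image URLs referenced by the recovered HTML pages.
    import glob
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class ImageCollector(HTMLParser):
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.images = set()

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                src = dict(attrs).get("src")
                if src:
                    self.images.add(urljoin(self.base_url, src))

    wanted = set()
    for path in glob.glob("recovered/*.html"):        # assumed location of saved pages
        collector = ImageCollector("http://www.codinghorror.com/")
        with open(path, encoding="utf-8", errors="ignore") as f:
            collector.feed(f.read())
        wanted |= collector.images

    print("\n".join(sorted(wanted)))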

    Read the article
