Search Results

Search found 22040 results on 882 pages for 'process improvement'.


  • Oracle Linux and Oracle VM Hardware Certification Program

    - by Durgam Vahia
    Oracle Linux and Oracle VM are continuing to see growth in the IHV (Independent Hardware Vendor) ecosystem. The Oracle Linux and Oracle VM Hardware Certification Program, also referred to as the HCL, provides a formal means for hardware vendors to work with Oracle to establish high-quality support for certified hardware platforms. Since the beginning of the program, a number of hardware partners have certified a range of server platforms on Oracle Linux and Oracle VM. Currently, the HCL lists over 400 certifications from 10 server vendors, and the list continues to grow at a rapid pace. A new hardware certification involves close collaboration between Oracle and the server partner to ensure that adequate testing is performed on the target server and the results are thoroughly reviewed. This rigorous process ensures that when a new hardware platform is listed on the HCL, it has full support from both Oracle and the respective partner. Additionally, once a certification is achieved on Oracle Linux with the current version of the Unbreakable Enterprise Kernel, future minor updates of the software carry over the certification, reducing the need for re-certification. For the complete list of certified hardware, please visit Oracle Linux and Oracle VM Certified Hardware. Also refer to the Frequently Asked Questions for more information.

    Read the article

  • Problem with shared ssh keys

    - by warren
    Following the process I've used in other environments (http://www.trilug.org/pipermail/trilug/Week-of-Mon-20080602/054712.html), I've tried setting up shared keys between my Mac and my CentOS 4 webserver. I've seen the same problem with my older Ubuntu 7.10 workstation trying to connect via keys to the same webserver. I have tried both dsa and rsa key types (ssh-keygen -t <type>). The sshd_config file on my webserver seems to allow key-based logins:

        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile      .ssh/authorized_keys

    And my .ssh/authorized_keys has my dsa and rsa keys added. Where should I be looking for what to change next to make key-based logins "Just Work™"? Is it related to the line #UseDNS yes – is sshd trying to do a reverse lookup on my IP but failing because it's NAT'd?
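    A checklist that often resolves this (a sketch assuming the usual causes; paths are the OpenSSH defaults): sshd's StrictModes setting silently ignores authorized_keys when the home directory or ~/.ssh is too permissive, and the client's verbose output shows whether the key is even being offered.

        # on the server: tighten the permissions that StrictModes checks
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # on the client: watch the key exchange in detail
        ssh -vvv user@webserver
        # on the server (as root, CentOS): watch the auth log while connecting
        tail -f /var/log/secure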

    Read the article

  • OTBI vs. OBIA

    - by PRajkumar
    What are the differences between OTBI and OBIA?

    OTBI – Oracle Transactional Business Intelligence
    OBIA – Oracle Business Intelligence Applications

    OBIA
    1. OBIA is the pre-packaged BI Apps product that Oracle has provided for several years. It is a data-warehouse-based solution.
    2. It is based on a universal data warehouse design, with different prebuilt adapters that can connect to various source applications to bring the data into the warehouse.
    3. It allows consolidating data from various sources and bringing them together.
    4. It provides a library of metrics that help measure the business.
    5. It provides a set of predefined reports and dashboards.
    6. OBIA works with multiple sources including E-Business Suite, PeopleSoft, JDE, SAP and Fusion Applications.

    OTBI
    1. It is real-time BI.
    2. There is no warehouse or ETL process for OTBI.
    3. It is for Fusion Apps only.
    4. OTBI leverages advanced technologies from both the BI platform and ADF to enable online BI queries directly against the database.
    5. OTBI does not have prebuilt dashboards and reports like OBIA.

    Note: Both OTBI and OBIA are available from the same metadata repository, and some of the repository objects are shared between them. The design allows the following configurations: OTBI only; OBIA only; OTBI and OBIA coexisting. Both OTBI and OBIA access Fusion Apps via ADF.

    Read the article

  • Want to work at Typemock? We’re Hiring

    - by RoyOsherove
    We are looking for a .NET\C++ developer to join the growing Typemock ranks. You need to:

    - Live in Israel
    - Know .NET very well (at least 3 years of .NET experience – VB.NET or C#, and be willing to learn the other one)
    - Have some recent C++ experience (sometime in the past couple of years)
    - Be interested in Agile development, unit testing and TDD (you don't have to be an expert – you'll become one on the job)
    - Have very good English
    - Have PASSION for programming

    It's an advantage to be a hardcore C++ dev, an open source contributor, or a public figure (blogger, speaker..) – but you don't have to be any of these. You will be working on one of our products, or several of them along the way, including Typemock Isolator, Test Lint, TeamMate and future products we are working on! We are counting on all our developers to be part of the design process and to take an active part in support and customer meetings, and the first day of every two weeks is dedicated to pet projects – you work on anything you want (even if it's not to do with Typemock)! Send an email with your ENGLISH CV to royo AT typemock.com.

    Read the article

  • IIS 7.5 FTP IIS Manager Users Login Fail (530)

    - by Jim
    I'm trying to set up an FTP site on IIS 7.5 that allows IIS Manager Users to log in. I'm following this guide: http://learn.iis.net/page.aspx/321/configure-ftp-with-iis-7-manager-authentication/. After setup, I cannot log in to the FTP site using an IIS Manager User account. The client error I get is:

        530 User cannot log in.
        Win32 error: Unspecified error.
        Error details: An error occurred during the authentication process.

    I tried both with and without a virtual host. A Windows account logs in fine. The only strange thing I noticed was that when setting up Read permission for Network Service, there was an access-denied error when setting permissions on "%SystemDrive%\Windows\System32\inetsrv\config\schema". Any thoughts? Thanks!
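    For later readers: that access-denied error during the guide's permission step may well be the root cause, since the guide has the FTP authentication provider read config and config\schema as Network Service. A sketch of granting the read permission from an elevated command prompt (exact quoting may need adjusting; ftpsvc is the IIS 7.5 FTP service name):

        icacls "%windir%\System32\inetsrv\config" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)R"
        icacls "%windir%\System32\inetsrv\config\schema" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)R"
        net stop ftpsvc && net start ftpsvc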

    Read the article

  • Add game mechanics through equipment?

    - by Sidar
    In a game with different weapons and armor that actually affect more than just player stats, how would you achieve such an effect? (These are just examples, not concrete ideas.) For example, we could have a handgun, an uzi, and then the graviton gun. The first two would just shoot bullets; the third does more than shoot a simple projectile – it could allow the player to hold an enemy and drag it around as a meat shield. The player could also wear generic armor, but at some point wear armor that can absorb projectiles; after absorbing enough projectiles, you can fire a giant blast. All these weapons and armor have different "behaviors" that either just raise stats or actually add new mechanics. In the simple case, most guns would have similar properties, and changing a few settings would create a new weapon (a handgun shoots at an interval of x seconds; lower this number and you have a machine gun). That obviously does not work if you intend to do more than shoot projectiles. I'm pretty much stuck on writing the interface structure. While weapons and armor have different purposes, both should be able to apply effects that change or add mechanics in the game world.

    Read the article

  • Problem with regsvr32 on Windows Server 2008

    - by Chris Anton
    Hi all! I am attempting to register a basic COM dll on a Windows Server 2008 Standard box. I run regsvr32 capicom.dll and it reports "DllRegisterServer in capicom.dll succeeded." This is the same process we've used for years on Windows Server 2003. Sadly, when I attempt to create the object via a very basic Microsoft VBScript example – Set oStore = CreateObject("CAPICOM.Store") – it throws an "ActiveX component can't create object" error. Thinking it might be a problem with this dll, I tried a few other DLLs we use, with the same result. I tried using the regsvr32 in system, system32 and syswow64, all with the same result. I don't know too much about the differences between each of those, but figured it was worth a shot. The dll is stored on the d:\ drive and seems to have correct permissions (though that would be a different error altogether). Thanks for any help or thoughts you might have!
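    A likely explanation, for what it's worth: capicom.dll only ever shipped as a 32-bit COM server, and on 64-bit Windows the 32-bit and 64-bit COM registrations are separate hives. Registering with the 32-bit regsvr32 and then running the test script with the 32-bit script host usually resolves this; the DLL path and test.vbs below are placeholders.

        rem on x64, System32 holds the 64-bit binaries and SysWOW64 the 32-bit ones
        C:\Windows\SysWOW64\regsvr32.exe d:\path\to\capicom.dll
        rem run the script 32-bit so CreateObject can see the 32-bit CAPICOM class
        C:\Windows\SysWOW64\cscript.exe test.vbs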

    Read the article

  • News about Oracle Documaker Enterprise Edition

    - by Susanne Hale
    Updates come from the Documaker front on two counts:

    Oracle Documaker Awarded XCelent Award for Best Functionality – Celent has published a new report entitled "Document Automation Solution Vendors for Insurers 2011". In the evaluation, Oracle received the XCelent award for Functionality, which recognizes the solution as the leader in this category of the evaluation. According to Celent, "Insurers need to address issues related to the creation and handling of all sorts of documents. Key issues in document creation are complexity and volume. Today, most document automation vendors provide an array of features to cope with the complexity and volume of documents insurers need to generate." The report ranks ten solution providers on Technology, Functionality, Market Penetration, and Services. Each profile provides detailed information about the vendor and its document automation system, the professional services and support staff it offers, product features, insurance customers and reference feedback, its technology, implementation process, and pricing. A summary of the report is available at Celent's web site.

    Documaker User Group in Wisconsin Holds First Meeting – Oracle Documaker users in Wisconsin made the first Documaker User Group meeting a great success, with representation from eight companies. On April 19, over 25 attendees got together to share information, best practices, experiences and concepts related to Documaker and enterprise document automation; they were also able to share feedback with Documaker product management. One insurer shared how they publish and deliver documents to both internal and external customers as quickly and cost-effectively as possible, since providing point-of-sale documents to the sales force in real time is crucial to obtaining and maintaining the book of business. They outlined best practices that ensure consistent development and testing strategies and processes are in place to maximize performance and reliability, and they gave an overview of the supporting applications they developed to monitor and improve performance as well as monitor and track each transaction. Wisconsin User Group meeting photos are posted on the Oracle Insurance Facebook page: http://www.facebook.com/OracleInsurance. The Wisconsin User Group will meet again on October 26. If you and other Documaker customers in your area are interested in setting up a user group, please contact Susanne Hale ([email protected]), (703) 927-0863.

    Read the article

  • PeopleSoft 9.2 Financial Management Training – Now Available

    - by Di Seghposs
    A guest post from Oracle University.... Whether you're part of a project team implementing PeopleSoft 9.2 Financials for your company or a partner implementing it for your customer, you should attend some of the new training courses. Everyone knows project team training is critical at the start of a new implementation, including configuration training on the core application modules being implemented. Oracle offers these courses to help customers and partners understand the functionality most relevant to complete end-to-end business processes, to identify any additional development work that may be necessary to customize applications, and to ensure integration between different modules within the overall business process.

    Training will provide you with the skills and knowledge needed to ensure a smooth, rapid and successful implementation of your PeopleSoft applications in support of your organization's financial management processes – including step-by-step instruction for implementing, using, and maintaining your applications. It will also help you understand the application and configuration options so you can make the right implementation decisions. Courses vary based on your role in the implementation and in the ongoing use of the application, and should be a part of every implementation plan, whether for an upgrade or a new rollout. Here are some of the roles that should consider training:

    · Configuration or functional implementers
    · Implementation consultants (Oracle partners)
    · Super users
    · Business analysts
    · Financial reporting specialists
    · Administrators

    PeopleSoft Financial Management Courses:

    New Features Course:
    · PeopleSoft Financial Solutions Rel 9.2 New Features

    Functional Training:
    · PeopleSoft General Ledger Rel 9.2
    · PeopleSoft Payables Rel 9.2
    · PeopleSoft Receivables Rel 9.2
    · PeopleSoft Asset Management Rel 9.2
    · Expenses Rel 9.2
    · PeopleSoft Project Costing Rel 9.2
    · PeopleSoft Billing Rel 9.2
    · PeopleSoft PS / nVision for General Ledger Rel 9.2

    Accelerated Courses (include content from two courses for more experienced team members):
    · PeopleSoft General Ledger Foundation Accelerated Rel 9.2
    · PeopleSoft Billing / Receivables Accelerated Rel 9.2
    · PeopleSoft Purchasing / Payables Accelerated Rel 9.2

    View the PeopleSoft Training Overview Video.

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Bakhtiyor
    I have a mail server configured using dovecot + postfix + mysql, and it was running fine on the server (Ubuntu Server). But during the last week it stopped working correctly: it doesn't send email. When I telnet localhost smtp I connect successfully, but when I type mail from:<[email protected]> and hit Enter, it hangs; nothing happens. Having reviewed the /var/log/mail.log file, I've found that the problem is probably (99%) in Postfix when it tries to connect to the MySQL server. As the log below shows, it says "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)":

        Nov 14 21:54:36 ns1 dovecot: dovecot: Killed with signal 15 (by pid=7731 uid=0 code=kill)
        Nov 14 21:54:36 ns1 dovecot: Dovecot v1.2.9 starting up (core dumps disabled)
        Nov 14 21:54:36 ns1 dovecot: auth-worker(default): mysql: Connected to localhost (mailserver)
        Nov 14 21:54:44 ns1 postfix/postfix-script[7753]: refreshing the Postfix mail system
        Nov 14 21:54:44 ns1 postfix/master[1670]: reload -- version 2.7.0, configuration /etc/postfix
        Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        Nov 14 21:54:52 ns1 postfix/trivial-rewrite[7759]: fatal: mysql:/etc/postfix/mysql-virtual-alias-maps.cf(0,lock|fold_fix): table lookup problem
        Nov 14 21:54:53 ns1 postfix/master[1670]: warning: process /usr/lib/postfix/trivial-rewrite pid 7759 exit status 1
        Nov 14 21:54:53 ns1 postfix/cleanup[7397]: warning: problem talking to service rewrite: Connection reset by peer
        Nov 14 21:54:53 ns1 postfix/master[1670]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling
        Nov 14 21:54:53 ns1 postfix/smtpd[7071]: warning: problem talking to service rewrite: Success

    I tried netstat -ln | grep mysql and it returns:

        unix  2      [ ACC ]     STREAM     LISTENING     5817    /var/run/mysqld/mysqld.sock

    The content of the /etc/postfix/mysql-virtual-alias-maps.cf file is:

        user = stevejobs
        password = apple
        hosts = localhost
        dbname = mailserver
        query = SELECT destination FROM virtual_aliases WHERE source='%s'

    Here I tried to change hosts = 127.0.0.1, but then it says "warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110)". So I am lost and don't know what else to change to solve the problem. Any help would be highly appreciated. Thank you.
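    A hint for readers hitting the same pair of errors: on Ubuntu, Postfix services such as trivial-rewrite usually run chrooted (the chroot column in /etc/postfix/master.cf), so the unix socket /var/run/mysqld/mysqld.sock is not visible from inside the chroot – hence error (2) – while the TCP attempt fails separately if MySQL is not listening on 127.0.0.1. A few checks worth running (a sketch; paths assume Ubuntu defaults):

        # is mysqld listening on TCP at all?
        netstat -ln | grep 3306
        # bind-address should be 127.0.0.1 and skip-networking must not be set
        grep -nE 'bind-address|skip-networking' /etc/mysql/my.cnf
        # see whether the smtpd/rewrite services run chrooted (5th column)
        grep -nE '^(smtp|rewrite)' /etc/postfix/master.cf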

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of listing every article, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    Spatial Database Definition and Research Documents – Here is the definition of a spatial database from Wikipedia: a spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.

    Select Only Date Part From DateTime – Best Practice – A very common question I receive is how to get only the date or time part from a datetime value. In this blog post I explain the same in very simple words.

    T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table – I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.

    SQL SERVER – Cannot resolve collation conflict for equal to operation – One of the very first errors I ever encountered in my career was this conflict. I have blogged about it, and I have realized that many others like me face this error.

    LEN and DATALENGTH of NULL Simple Example – Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.

    Recovery Models and Selection – A very simple and easy explanation of database backup recovery models and how to select the best option for you.

    Explanation SQL SERVER Hash Join – A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).

    Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY –

        SELECT yourcolumns
        FROM tablenames
        JOIN tablenames
        WHERE condition
        GROUP BY yourcolumns
        HAVING aggregatecolumn condition
        ORDER BY yourcolumns

    NorthWind Database or AdventureWorks Database – Sample Databases – In this blog post we learn how to install the Northwind database. I also shared the source from which one can download this database, as it is used in many examples in the MSDN help files.

    sp_HelpText for sp_HelpText – Puzzle – A quick, simple puzzle – do you know the answer? If not, go ahead and read the blog.

    2008

    SQL SERVER – 2008 – Step By Step Installation Guide With Images – When SQL Server 2008 was newly introduced, lots of people had no clue how to install it, and I used to receive a huge number of questions about it. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Even today this blog post has been a bible for everyone who is confused by SQL Server installation.

    Inline Variable Assignment – I loved this feature; I have always wanted it to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I talked about this feature. I think it saves some time and makes the code more readable.

    Introduction to Policy Management – Enforcing Rules on SQL Server – If our company policy is that all stored procedures be created with the prefix 'usp', developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.

    2009

    Performance Counters from System Views – By Kevin Mckenna – Many of you are not aware that performance information is readily available in SQL Server, without querying performance counters from a custom application or via perfmon. Until now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this gives you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.

    Customize Toolbar – Remove Debug Button from Toolbar – I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The debugger button is similar to a play button and is used to run Visual Studio debugging commands. For this reason it gets very infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.

    Effect of Normalization on Index and Performance – A very interesting conversation which started on Twitter. If you want to read one link, this is the one I encourage you to read.

    SSMS Feature – Multi-server Queries – Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantly without going anywhere.

    Query Optimizer Hint ROBUST PLAN – Question to You – "ROBUST PLAN" is a kind of query hint which works quite differently from other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post.

    2010

    Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples: Fastest Way to Restore the Database; Difference Between DATETIME and DATETIME2; Difference Between DATETIME and DATETIME2 – WITH GETDATE.

    Shrinking NDF and MDF Files – Readers' Opinion – Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, I am saying it first and loud. Now, Imran's comment was written while keeping in mind only the process of how the shrink operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    2011

    Solution – Puzzle – SELECT * vs SELECT COUNT(*) – This is a very interesting question, and I am very confident that not everyone knows the answer. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison?

    2012

    Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage – In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let us understand the entire discussion with the help of three different examples.

    Finding Size of a Columnstore Index Using DMVs – One of the most common questions I see is the need for a list of columnstore indexes along with their sizes and corresponding table names. I quickly wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts to retrieve details of the components associated with a columnstore index; however, I believe the following script is sufficient to start getting an idea of columnstore index size.

    Developer Training Resources and Summary Roundup:

    Developer Training – Importance and Significance – Part 1 – In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all its technical aspects. I have a very high opinion of training employees; it is the most important task.

    Developer Training – Employee Morals and Ethics – Part 2 – In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.

    Developer Training – Difficult Questions and Alternative Perspective – Part 3 – This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization.

    Developer Training – Various Options for Developer Training – Part 4 – In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and that I should have explored each training subject in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.

    Developer Training – A Conclusive Summary – Part 5 – There is no better motivation than a personal desire to learn new technology; honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt & overcome" are essential lessons of life. One cannot stop learning and resist change. In the IT industry, the "ego of knowing it all" and "resistance to change" are the most challenging issues.

    A Quick Look at Logging and Ideas around Logging – Question: what is the first thing that comes to your mind when you hear the word "logging"? Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends, and let us go over them one by one.

    Beginning Performance Tuning with SQL Server Execution Plan

    Solution of Puzzle – Swap Value of Column Without Case Statement – Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I requested the community's help in coming up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • I want to hit Apex SQL with a big stick

    - by Michael Stephenson
    <Whinge> Thought I'd just have a little whinge about this product, which caused me a load of grief the other day..... The background was that my development machine had a completely full hard disk which I needed to sort out. Upon investigation I found that the msdb database had managed to get very large. This was caused because a long time ago (and I can't even remember why) I tried out Apex SQL. After a few days I decided to uninstall it and thought nothing more of it. What I didn't realise was that uninstalling it doesn't actually uninstall it (and it doesn't inform you about this); some assemblies were left on my machine. Every time SQL Server was running, it started the Apex SQL connection monitor, which then ran in the background and regularly recorded information in the msdb database. Over time it had recorded enough to fill the disk. The article below explains how to remove this fully, so if you're having the problem then try this out: http://knowledgebase.apexsql.com/2007/08/how-to-uninstall-apexsqlconnectionmonit_09.htm Once this was sorted out, it was interesting to read the above article, because I just don't think the approach used by the vendor of this software is a very good one. So for the Apex team, I just wanted to pass on a thought: if I want to uninstall your product, you should tell me if stuff is left on the machine, especially if a process will keep running which is going to fill my machine with useless data. </Whinge>

    Read the article

  • Deploy to JBoss 7 using Hudson Deploy plugin

    - by Uluk Biy
    I have 2 machines, one of which runs Hudson CI and the other JBoss 7 AS. In Hudson, I have installed the "Deploy Plugin", created a new job, and filled in the required JBoss manager user connection fields. When I run the job, the project builds successfully; however, the deployment to the remote JBoss AS is never triggered. There are no errors or messages about the deployment in the log. What should I do? EDIT: The deployment is configured (at least expected to be triggered) as a "Post-build Action" with these parameters; it is not a separate job:

        [x] Deploy war/ear to a container
        WAR/EAR files             : **/*.war
        Container                 : JBoss 7.x
        Manager user name         : test
        Manager password          : * * * *
        JBoss URL                 : http://192.168.1.2
        JBoss JMX Management port : 9990
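    One way to rule out connectivity and credentials outside Hudson is to deploy the same artifact by hand with the JBoss AS 7 CLI. This is only a sketch: the CLI talks to the native management port (9999 by default, distinct from the 9990 HTTP/JMX port used above), and the password and WAR path are placeholders.

        $JBOSS_HOME/bin/jboss-cli.sh --connect --controller=192.168.1.2:9999 \
            --user=test --password=secret \
            --command="deploy /path/to/project.war"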

    Read the article

  • Oracle Social Network -The Social Glue for Enterprise Applications

    - by me
    Tom Petrocelli of Enterprise Strategy Group recently published a report, "Oracle Social Network: The Social Glue for Enterprise Applications", on Oracle Social Network (OSN) and how traditional social products create social silos whereas OSN is the "social glue" for enterprise applications. This report supports a central point of Oracle's social business strategy: seamlessly integrating social capabilities into the main business processes. Quote from the report: "Oracle has adopted the correct approach to creating a social layer and socially enabled applications. Oracle Social Network is not simply another enterprise social network product; it is a complete social layer for the enterprise application stack. This approach will serve Oracle users well in the future." OSN allows you to capture the Conversations related to a business process right where they happen – within the respective business application. Fusion CRM is an excellent example of this approach. Quote from the report: "Oracle's new software, Oracle Social Network, is an example of a solution to the silo problem. While Oracle fields a typical enterprise social network application with microblogging, file sharing, shared documents or wikis, and activity streams, the front-end application is only a small part of what Oracle Social Network does. Instead, Oracle Social Network is a platform that provides social features as a service to other enterprise applications. In effect, Oracle Social Network socially enables all of Oracle's enterprise applications—all enterprise applications really—with not only the same features, but also the same conversations. As a result, the social conversations act as a conduit for inter-application communication and collaboration." Source: ESG Research Report, Oracle Social Network: The Social Glue for Enterprise Applications, August 2012. Cross-posted from the Oracle WebCenter blog.

    Read the article

  • System backup with Norton Ghost

    - by Mehper C. Palavuzlar
    I will upgrade my system from Windows Vista Home Premium (x64) to Windows 7 (x64). Before starting the upgrade process, I want to back up my current system with Norton Ghost. I have never used it before, so I need assistance to do that. At the moment Vista is using 139 GB of space, and I have a 1 TB external HD connected via USB. If you can give me step-by-step instructions on how to back up, and how to restore if the upgrade somehow fails, I'd appreciate it. Thanks.

    Read the article

  • ERP Customizations...Are your CEMLI’s Holding You Back?

    - by Di Seghposs
    Upgrading your Oracle applications can be an intimidating and nerve-racking experience, depending on your organization's level of customizations. Oftentimes customizations have an ongoing effect on your organization, causing increased complexity, less flexibility, and additional maintenance cost. Organizations that reduce their dependency on customizations:

    - Reduce complexity by up to 50%
    - Reduce the cost of future maintenance and upgrades
    - Create a foundation for easier enablement of new product functionality and business value

    Oracle Consulting offers a complimentary service called Oracle CEMLI Benchmark and Analysis, an effective first step in evaluating the CEMLI complexity of your E-Business Suite application. The service will help your organization understand the number of customizations you have and how you rank against your peer group, and it identifies target areas for customization reduction by providing a catalogue of customizations by object type, CEMLI ID or Project ID, and business process. Whether you're currently deployed on-premise, in a managed private cloud, or considering a move to the cloud, understanding your customizations is critical as you begin an upgrade. Learn how you can reduce complexity and overall TCO with this informative screencast. For more information, or to take advantage of this complimentary service today, contact Oracle Consulting directly at [email protected]

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, plays a predominant role in our lives. More and more technologies are used to analyze/track our behavior and try to detect patterns, in order to propose "the best/right user experience" – from the Google Ad services to telco companies and large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to users.

    Some of these "data feeds" are raw, unstructured data and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies – let's say with Coherence aggregators – you will find the whole Hadoop MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters.

    In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple. Install an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server on a Linux x64 server (or VirtualBox appliance). Get the Apache Hadoop distribution from http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1 and the Oracle Big Data Connectors from http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the Java version of your Linux server:

        java -version
        java version "1.7.0_40"
        Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
        Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

    Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 and modify your .bash_profile (also see my sample .bash_profile):

        export HADOOP_HOME=/u01/hadoop-1.2.1
        export PATH=$PATH:$HADOOP_HOME/bin
        export HIVE_HOME=/u01/hive-0.11.0
        export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin

    Set up ssh trust for the Hadoop process; this is a mandatory step. In our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.

    We need to "fine tune" some Hadoop configuration files under $HADOOP_HOME/conf: core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version. As Hadoop is managing our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce = /mapred/...); HDFS uses the /dfs/... layout structure. Format the HDFS hadoop file system and start the Java components for the HDFS system. As an additional check, you can use the Hadoop GUI browsers to inspect the content of your HDFS configuration.

    Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and move it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You can create/use a Hive db, but in our case we will make a simple integration of "raw data" through the creation of an external table against a local Oracle instance (on the same Linux box we run the one-node Hadoop HDFS cluster and one Oracle DB). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :).

    Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test:

        [oracle@dg1 bin]$ ./hdfs_stream
        Usage: hdfs_stream locationFile
        [oracle@dg1 bin]$

    Load the data into the Hadoop HDFS file system:

        hadoop fs -mkdir bgtest_data
        hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
        [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
        Found 1 items
        -rw-r--r--   1 oracle supergroup      54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt

    Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own db id; replace it accordingly with yours):

        #!/bin/bash
        export ORAENV_ASK=NO
        export ORACLE_SID=dg1
        . oraenv
        sqlplus /nolog <<EOF
        CONNECT / AS sysdba;
        CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
        CREATE USER BGUSER IDENTIFIED BY oracle;
        GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
        GRANT EXECUTE ON sys.utl_file TO BGUSER;
        GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
        GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
        CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
        GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
        EOF

    Put the following in a file named t3.sh and make it executable:

        hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
        oracle.hadoop.exttab.ExternalTable \
        -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
        -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
        -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
        -D oracle.hadoop.exttab.columnCount=7 \
        -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
        -D oracle.hadoop.connection.user=BGUSER \
        -D oracle.hadoop.exttab.printStackTrace=true \
        -createTable --noexecute

    then test the creation of the external table with it:

        [oracle@dg1 samples]$ ./t3.sh
        ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
        Oracle SQL Connector for HDFS Release 2.2.0 - Production
        Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
        Enter Database Password:
        The create table command was not executed.
        The following table would be created.
        CREATE TABLE "BGUSER"."BGTEST_DP_XTAB"
        ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000),
          "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) )
        ORGANIZATION EXTERNAL
        ( TYPE ORACLE_LOADER
          DEFAULT DIRECTORY "BGT_DATA_DIR"
          ACCESS PARAMETERS
          ( RECORDS DELIMITED BY 0X'0A'
            CHARACTERSET AL32UTF8
            STRING SIZES ARE IN CHARACTERS
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY 0X'2C'
            MISSING FIELD VALUES ARE NULL
            ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000),
              "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) )
          LOCATION ( 'osch-20131022081035-74-1' )
        ) PARALLEL REJECT LIMIT UNLIMITED;
        The following location files would be created.
        osch-20131022081035-74-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:

        The create table command succeeded.
        [same DDL as above, with LOCATION ( 'osch-20131022081719-3239-1' )]
        The following location files were created.
        osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
        54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt

    Finally, count the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:

        SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

          COUNT(*)
        ----------
              1151

    In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

    NoSQL: Introduction to Oracle NoSQL Database; Using Oracle NoSQL Database
    Big Data: Introduction to Big Data; Oracle Big Data Essentials; Oracle Big Data Overview
    Oracle Data Integrator: Oracle Data Integrator 12c: New Features; Oracle Data Integrator 11g: Integration and Administration; Oracle Data Integrator: Administration and Development; Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c: Oracle Coherence 12c: New Features; Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate 11g: Fundamentals for Oracle; Fundamentals for SQL Server; Fundamentals for DB2; Fundamentals for Teradata; Fundamentals for HP NonStop; Management Pack: Overview; Troubleshooting and Tuning; Advanced Configuration for Oracle

    Other resources: Apache Hadoop – http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.

    About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite throughout the EMEA region.

    Read the article

  • Oracle @ AIIM Conference

    - by [email protected]
    Oracle will be at the AIIM Conference and Exposition next week in Philadelphia. On the opening morning, Robert Shimp, Group Vice President, Global Technology Business Unit, Oracle Corporation, will moderate an executive keynote panel. Mr. Shimp will lead four Oracle customer executives through a lively discussion of how innovative organizations are driving the integration of content management with their core business processes, on Tuesday April 20th at 8:45 AM. Our panelists are:

    CINDY BIXLER, CIO, Embry Riddle Aeronautical University
    TOM SHOWALTER, Managing Director, JP Morgan Chase
    IRFAN MOTIWALA, Vice President, Moody's Investors Service
    MONICA CROCKER, CRM, PMP, Corporate Records Manager, Land O'Lakes

    For more information on our panelists, click here. Oracle will be in booth #2113 at the AIIM Expo. Come by and enter the daily raffle to win a Netbook! Oracle and Oracle partners will demonstrate solutions that increase productivity, reduce costs and ensure compliance for business processes such as accounts payable, human resource onboarding, marketing campaigns, sales management, large-scale diagrams for facilities and manufacturing, case management, and others. Oracle products including Oracle Universal Content Management, Oracle Imaging and Process Management, Oracle Universal Records Management, Oracle WebCenter, Oracle AutoVue, and Oracle Secure Enterprise Search will be demonstrated in the booth. Oracle will host a private event at The Field House Sports Bar – see your Oracle representative for more details. Oracle customers can meet in private meeting rooms with their Oracle representatives.

    Key sessions: Besides the opening morning keynote panel, Oracle will have a number of other sessions at the conference. Oracle Content Management will be featured in session G08 – "A Passage to Improving Healthcare: Enhancing EMR with Electronic Records", Wednesday April 21st, 2:25 PM–3:10 PM. Kristina Parma of Oracle partner ImageSource will deliver this session, along with Pam Doyle of Fujitsu and Nancy Gladish of Swedish Medical Center. Kristina will also be in the Oracle booth to talk about this solution. On Tuesday April 20th at 4:05 PM, Ajay Gandhi of Oracle will deliver a session entitled "Harnessing SharePoint Content for Enterprise Processes in PeopleSoft, Siebel, E-Business Suite and JD Edwards". Tuesday April 20th, 1:15 PM–1:45 PM – "Bringing Content Management to Your AP, HR, Sales and Marketing Processes", Application Showcase Theater (on the AIIM Expo floor, booth 1549). Wednesday April 21st, 12:30 PM–1:00 PM – "Embed and Edit Content Anywhere", Application Showcase Theater (on the AIIM Expo floor, booth 1549). For more information, see the AIIM Expo page on the Oracle website.

    Read the article

  • Windows Explorer keeps crashing in Windows 8.1

    - by Jonathan Allen
    This started a few days ago on a machine that had been running Win 8.1 for over a month. I just finished a clean install of Windows 8.1 with a handful of apps like Dropbox and Office, yet I'm still getting Windows Explorer crashes such as this:

        Faulting application name: Explorer.EXE, version: 6.3.9600.16408, time stamp: 0x523d251b
        Faulting module name: SHELL32.dll, version: 6.3.9600.16409, time stamp: 0x523fac81
        Exception code: 0xc00000fd
        Fault offset: 0x0000000000189247
        Faulting process id: 0xee8
        Faulting application start time: 0x01ced2a62f6e5b2e
        Faulting application path: C:\WINDOWS\Explorer.EXE
        Faulting module path: C:\WINDOWS\system32\SHELL32.dll
        Report Id: e25c6b13-3eb4-11e3-be7a-2cd05a597b87
        Faulting package full name:
        Faulting package-relative application ID:

    I assume that it is some sort of shell extension, but I don't know how to figure out which one.
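    For anyone debugging the same thing: third-party shell extensions loaded into Explorer are a common culprit, and you can at least enumerate the approved ones from the registry before disabling them one by one (NirSoft's ShellExView is a popular GUI for the same job). A quick look, assuming the standard key location:

        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved"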

    Read the article

  • What extra permission settings were added in Windows Server 2003 over Windows Server 2000?

    - by Jon Seigel
    We have a domain controller currently running Windows Server 2000, and we're in the process of upgrading some of our workstations to Windows 7. The problem is that users are getting access-denied messages for things they should be able to do, even trivial things like deleting shortcuts from the desktop. The users run at less than administrative levels, which we want to maintain. We think this is caused by Windows 7 having extra security permission settings that default to denied, because the new settings wouldn't actually exist in the Windows 2000 profiles. The reason I'm asking about Windows Server 2003 is that we have an available license for it, but not for 2008 (which would likely solve the problem completely, but costs $). So what I'd like to find out is whether the permission settings in 2003 are sufficient for our needs, to justify upgrading the domain controller to 2003.

    Read the article

  • How to restart php-cgi automatically with spawn-fcgi

    - by mrm8
    I'm running nginx with PHP as FastCGI. It's working just fine; however, php-cgi keeps exit()ing after serving 500 requests. I tried increasing that value (PHP_FCGI_MAX_REQUESTS) and that worked, but it seems to be a workaround. Then I set it to 0, and it hasn't exit()ed yet. But I think there's a reason why php-cgi should be restarted periodically. At the moment I'm running php-cgi with spawn-fcgi: when the php process exits, spawn-fcgi exits too. Now, is there a way to automatically restart php (without dirty hacks like while [ 1 ]; do spawn-fcgi; done etc.)?
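    One conventional answer is to put php-cgi under a process supervisor instead of letting spawn-fcgi daemonize it. A sketch of a runit/daemontools-style "run" script, assuming a spawn-fcgi build that supports -n (stay in the foreground) and the usual Debian paths:

        #!/bin/sh
        # the supervisor re-runs this script whenever php-cgi exits,
        # so PHP_FCGI_MAX_REQUESTS can stay at a sane value
        export PHP_FCGI_MAX_REQUESTS=500
        exec /usr/bin/spawn-fcgi -n -s /var/run/php-fcgi.sock \
            -u www-data -g www-data -- /usr/bin/php-cgi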

    Read the article

  • MVVM Light V3 released at #MIX10

    - by Laurent Bugnion
    During my session "Understanding the MVVM pattern" at MIX10 in Vegas, I showed some components of the MVVM Light Toolkit V3, which gave me the occasion to announce the release of version 3. This version has been in the alpha stage for a while and has been tested by many users. It is very stable and ready for release. So here we go!

    What's new: What's new in MVVM Light Toolkit V3 is the topic of the next post.

    Cleaning up: I recommend cleaning up older versions before installing V3. I prepared a page explaining how to do that manually. Unfortunately I didn't have time to create an automatic cleaner/installer; this is very high on my list, but with the book and the conferences going on, it will take a little more time. Cleaning up is recommended because I changed the names of some DLLs to avoid confusion (between the WPF 3.5 and WPF 4 versions, as well as between the SL3 and SL4 versions). More details are in the section titled "Compatibility".

    Installation: Installing the MVVM Light Toolkit is the manual process of unzipping a few files. The installation page has been updated to reflect the newest information.

    Compatibility: MVVM Light Toolkit V3 has components for the following environments and frameworks:

    Visual Studio 2008: Silverlight 3; Windows Presentation Foundation 3.5 SP1
    Expression Blend 3: Silverlight 3; Windows Presentation Foundation 3.5 SP1
    Visual Studio 2010 RC: Silverlight 3; Silverlight 4; Windows Presentation Foundation 3.5 SP1; Windows Presentation Foundation 4; Silverlight for Windows Phone 7 series
    Expression Blend 4 beta: Silverlight 3; Silverlight 4; Windows Presentation Foundation 3.5 SP1; Windows Presentation Foundation 4

    Feedback: As usual, I welcome your constructive feedback. If you want an issue discussed in public, the best way is through the discussion page on the CodePlex site. If you wish to keep the conversation private, please check my Contact page for ways to reach me.

    Video, tutorials: There are a few new videos and tutorials available for the MVVM Light Toolkit. The material is listed on the Get Started page, under "tutorials".

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Applications are now open for the Microsoft Accelerator for Windows Azure - 2013

    - by ScottGu
    In October, I introduced the finalists for the Microsoft Accelerator for Windows Azure, powered by TechStars. Over the past couple of months, these startups have been mentored by business and technology leaders, met with investors, learned from each other, and, most importantly, been building great products. You can learn more about the startups in the first class and how they're using Windows Azure here. As the first class approaches Demo Day on January 17th, I'm happy to announce that today we are opening applications for the second class of the Microsoft Accelerator for Windows Azure. The second class will begin on April 1, 2013 and conclude with Demo Day on June 26, 2013. If you are currently working at a startup or considering founding your own company, I encourage you to apply. We're accepting applications through February 1st, 2013. You can find more information about the Accelerator and the application process here. It's been truly inspiring to work with the current class of startups. This inaugural class has brought incredible energy and innovation, and I look forward to reviewing the applications for the next class. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Icinga error "Icinga Startup Delay does not exist" although it does

    - by aaron
    I just installed Icinga to monitor my server, following this guide: http://docs.icinga.org/0.8.1/en/wb_quickstart-idoutils.html. Everything built and installed correctly, but Icinga is reporting a critical error with the reason: "The command defined for service Icinga Startup Delay does not exist". However, I can see that ${ICINGA_BASE}/etc/objects/localhost.cfg contains:

        define service{
                use                     local-service   ; Name of service template to use
                host_name               localhost
                service_description     Icinga Startup Delay
                check_command           check_icinga_startup_delay
                notifications_enabled   0
                }

    and ${ICINGA_BASE}/etc/objects/commands.cfg contains:

        define command {
                command_name    check_icinga_startup_delay
                command_line    $USER1$/check_dummy 0 "Icinga started with $$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$)) seconds delay | delay=$$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$))"
                }

    Both of these files had not been modified since the whole make/install process. I am running on Ubuntu 10.04, the most recent build of icinga-core, and apache2 2.2.14. What must I do to tell Icinga that the command exists? Or is the problem that check_dummy does not exist? Where or how would I define that?
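    Two checks that usually pin this down (a sketch; ${ICINGA_BASE} is the install prefix used in the quickstart guide): the preflight parser reports exactly which object definitions it cannot resolve, and commands.cfg only takes effect if icinga.cfg actually includes it.

        # validate the whole object configuration before (re)starting
        ${ICINGA_BASE}/bin/icinga -v ${ICINGA_BASE}/etc/icinga.cfg
        # confirm commands.cfg is referenced via a cfg_file directive
        grep -n 'cfg_file' ${ICINGA_BASE}/etc/icinga.cfg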

    Read the article

  • 8051 MCU debug board function

    - by b-gen-jack-o-neill
    Hi, in school I have written many programs for 8051-compatible CPUs, but I never actually knew how our "debug" sets worked. I mean, we test our programs on special boards which allow you to load a program into the CPU very simply via the PC serial port. But I think you know this much better than I do. So how does it work? I know there is a chip which adjusts the signal level from the PC serial port to TTL logic and is then connected to the serial line of the 8051, but that's all I know. Actually, even my teacher doesn't know how it works, since the school bought it all. So, I suspect there is some program already running on the 8051 which handles communication and stores your program into memory, am I right? But how can you make the 8051 process instructions from a location other than ROM? Because if I am right, you cannot write into ROM memory with any instruction, and the 8051 can only read instructions from ROM?

    Read the article
