Search Results

Search found 69128 results on 2766 pages for 'oracle data integrator'.

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers, and on other versions of Oracle WebLogic, can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature, present only in Oracle Utilities Application Framework 4 and above, that allows JMX to be used for management and monitoring of the Oracle Utilities web applications. In this case JMX can be used both to perform monitoring and to manage the cache.

    In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting, and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

    spl.runtime.management.rmi.port=6740
    spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
    jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
    jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
    ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
    ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Site.

    Once it has been configured, and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes, I will use jconsole, but any JSR160 compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed beforehand to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL in the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try to view the files you will not be able to read the credentials. There are three MBeans available to you:

    flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but are now authorization controlled.

    JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp version of the JVMInfo utility is still provided, for backward compatibility, but is now also authorization controlled.

    JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring. We provide our own implementation of the base MBeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This MBean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This MBean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and Thread MXBeans.

    Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160 compliant JMX browser or JMX console.
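
    As an illustration, the same connection jconsole makes can also be made from a small JSR160 client. The following is a minimal sketch, assuming the demonstration connector URL from the spl.properties above; the "admin"/"secret" credentials are placeholders for whatever was set in the JMX Enablement options, not real accounts.

        import java.util.HashMap;
        import java.util.Map;

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class OuafJmxClient {
            public static void main(String[] args) throws Exception {
                // Connector URL as generated into spl.properties (demonstration value).
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector");

                // Credentials from the JMX Enablement settings (placeholders here).
                Map<String, Object> env = new HashMap<>();
                env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "secret" });

                try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                    MBeanServerConnection mbs = connector.getMBeanServerConnection();
                    // List the registered MBeans; the flushBean, JVMInfo and
                    // JVMSystem beans described above should appear here.
                    for (ObjectName name : mbs.queryNames(null, null)) {
                        System.out.println(name);
                    }
                }
            }
        }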

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    guest article: Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default... but WebLogic may not have been 'Extended' to allow you to call it up (we are hopeful this 'Extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of the following document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:
    1. Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    2. Ensure you have logged into the computer as the installing owner of EPM.
    3. Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    4. On the machine where WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    5. Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    6. Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    7. Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    8. Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    9. Start the AdminServer.
    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines you may start and stop services via the following command line commands, analogous to those run on Linux/Unix (these space out the timing of these events more carefully):
    Ensure EPM is up: \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
    Ensure WebLogic is up: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
    Shut down WebLogic: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
    Shut down EPM: \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat
    Now you should be able to troubleshoot more successfully with the EM tool.

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members -- and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection, and cover the following topics:

    SOA Practitioner Guides
    Creating an SOA Roadmap (PDF, 54 pages, published: February 2012) - The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.
    A Framework for SOA Governance (PDF, 58 pages, published: February 2012) - Successful SOA requires a strong governance strategy that designs in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc., which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.
    Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012) - SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.
    Identifying and Discovering Services (PDF, 64 pages, published: March 2012) - What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure of the success of an SOA initiative. This document describes a pragmatic approach for collecting the necessary information for identifying proper services and facilitating service reuse.
    Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012) - Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.

    SOA Reference Architectures
    SOA Foundation (PDF, 70 pages, published: February 2012) - This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.
    SOA Infrastructure (PDF, 86 pages, published: February 2012) - Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.

    SOA White Papers and Data Sheets
    Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012) - Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This executive data sheet and white paper describe Oracle's proven approach to SOA.
    Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012) - SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for October 2012.

    1. OAM/OVD JVM Tuning | @FusionSecExpert - Vinay from the Oracle Fusion Middleware Architecture Group (known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.
    2. SOA Galore: New Books for Technical Eyes Only - Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
    3. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley - "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic - when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.
    4. Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger - "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." -- Irving Wladawsky-Berger
    5. Eventually, 90% of tech budgets will be outside IT departments | ZDNet - Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.
    6. ADF Mobile - Login Functionality | Andrejus Baranovskis - "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.
    7. Podcast: Are You Future Proof? - Part 2 - In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen (Vennster) discuss re-tooling one's skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable.
    8. ADF Mobile Custom Javascript - iFrame Injection | John Brunswick - The ADF Mobile Framework provides a range of out of the box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.
    9. Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio - TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.
    10. Architects Matter: Making sense of the people who make sense of enterprise IT - Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise."

    Thought for the Day
    "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." -- Frederick P. Brooks
    Source: SoftwareQuotes.com

    Read the article

  • OPN Specialized Latest News (15th November)

    - by swalker
    HELPING YOU TO SPECIALIZE

    WebCenter Implementation Specialist Exam Preparation Webcasts: WebCenter Content and WebCenter Portal
    Oracle Partner Network would like to invite you to Refresh Courses for WebCenter Content and WebCenter Portal, to help partners prepare for the WebCenter Implementation Specialist exams. This is a 3-hour intensive refresher partner-only training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist exam depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners. Who should attend? Partner consultants who want to become Oracle WebCenter Content or WebCenter Portal Certified Implementation Specialists (or both), which will help them differentiate themselves to customers and support their companies in becoming Specialized. Webcast details: Click here to read more...

    Specialized Partners Only! New Service to Promote Your Events
    The Partner Event Publisher has just been made available to all specialized partners in EMEA. Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. Click here to read more information and watch a short video demo.

    VADs Get Specialized
    Effective November 1, 2011, VADs with a valid Value Added Distributor Agreement will no longer be required to meet the customer reference requirements outlined in the business criteria section in order to become specialized. VADs must continue to meet all other business and competency criteria set forth in the applicable Knowledge Zone prior to specialization approval.

    New Certification: Pillar Axiom 600 Storage System
    Your opportunity to take the Pillar Axiom 600 Storage System Essentials (1Z0-581) exam is available now in beta. Pass the exam so you can become a Pillar Axiom 600 Storage System Implementation Specialist! Free vouchers are available for Oracle Partners! If you would like to receive a free beta exam voucher, please send your request to [email protected] and include your name, business email address, company, and the exam name (Pillar Axiom 600 Storage System Essentials Beta exam).

    New Certification Available: Oracle Utilities Customer Care and Billing
    Oracle Utilities Customer Care and Billing is a solution designed to help you meet market windows and regulatory deadlines while enjoying a low total cost of ownership and a high return on investment. Take the Oracle Utilities Customer Care and Billing 2 Essentials (1Z0-562) exam now to become an Oracle Utilities Customer Care and Billing 2 Essentials Implementation Specialist.

    MEASURING YOUR SUCCESS

    We had 1674 Specialized Partners covering 5364 Specializations. Please note that due to OPN contract renewals, at any given point in time there are valid Specialized Partners and Specializations which are temporarily not captured in the total statistics. An incremental 1961 individuals were accredited as Implementation Specialists, giving an EMEA cumulative total of 9598 Implementation Specialists. 26 ISVs obtained one or more Ready's, for a total of 53 Ready's.

    Don't forget! You can submit your own press releases to Oracle! Every time you achieve specialization we'd like to support you in getting the message out! Press guidelines and a submission link can be found on the OPN Portal here.

    Read the article

  • Reading Data from DDFS: ValueError: No JSON object could be decoded

    - by secumind
    I'm running dozens of map/reduce jobs for a number of different purposes using Disco. My data has grown enormous and I thought I would try using DDFS for a change rather than standard txt files. I've followed the Disco map/reduce example Counting Words without too much difficulty, and with the help of others (Reading JSON specific data into DISCO) I've gotten past one of my latest problems. I'm trying to read data in and out of DDFS to better chunk and distribute it, but am having a bit of trouble. Here's an example file, file.txt:

    {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. #TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"}
    {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": 
"https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . 
", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>"} I load it into DDFS with: # ddfs chunk data:test1 ./file.txt created: disco://localhost/ddfs/vol0/blob/44/file_txt-0$549-db27b-125e1 I test that the file is indeed loaded into ddfs with: # ddfs xcat data:test1 {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. 
#TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"} {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super 
important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . ", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>

    At this point everything is great. I load up the script that resulted from a previous Stack post:

    from disco.core import Job, result_iterator
    import gzip

    def map(line, params):
        import unicodedata
        import json
        r = json.loads(line).get('text')
        s = unicodedata.normalize('NFD', r).encode('ascii', 'ignore')
        for word in s.split():
            yield word, 1

    def reduce(iter, params):
        from disco.util import kvgroup
        for word, counts in kvgroup(sorted(iter)):
            yield word, sum(counts)

    if __name__ == '__main__':
        job = Job().run(input=["tag://data:test1"], map=map, reduce=reduce)
        for word, count in result_iterator(job.wait(show=True)):
            print word, count

    NOTE: This script runs fine if the input is ["file.txt"]; however, when I run it with "tag://data:test1" I get the following error:

    # DISCO_EVENTS=1 python count_normal_words.py
    Job@549:db30e:25bd8: Status: [map] 0 waiting, 1 running, 0 done, 0 failed
    2012/11/25 21:43:26 master New job initialized!
    2012/11/25 21:43:26 master Starting job
    2012/11/25 21:43:26 master Starting map phase
    2012/11/25 21:43:26 master map:0 assigned to solice
    2012/11/25 21:43:26 master ERROR: Job failed: Worker at 'solice' died:
    Traceback (most recent call last):
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 329, in main
        job.worker.start(task, job, **jobargs)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 290, in start
        self.run(task, job, **jobargs)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 286, in run
        getattr(self, task.mode)(task, params)
      File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 299, in map
        for key, val in self['map'](entry, params):
      File "count_normal_words.py", line 12, in map
      File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
        return _default_decoder.decode(s)
      File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded
    2012/11/25 21:43:26 master WARN: Job killed
    Status: [map] 1 waiting, 0 running, 0 done, 1 failed
    Traceback (most recent call last):
      File "count_normal_words.py", line 28, in <module>
        for word, count in result_iterator(job.wait(show=True)):
      File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 348, in wait
        timeout, poll_interval * 1000)
      File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 309, in check_results
        raise JobError(Job(name=jobname, master=self), "Status %s" % status)
    disco.error.JobError: Job Job@549:db30e:25bd8 failed: Status dead

    The error states: ValueError: No JSON object could be decoded. Again, this works fine using the text file as input, but not with DDFS. Any ideas? I'm open to suggestions.

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

    kaefert@blechmobil:~$ lsusb -s 2:3
    Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is some problem that prevents that disk from being mounted:

    kaefert@blechmobil:~$ dmesg
    ...
    [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd
    [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320
    [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
    [ 113.217790] usb 2-1: Product: Expansion Desk
    [ 113.217792] usb 2-1: Manufacturer: Seagate
    [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K
    [ 113.435404] usbcore: registered new interface driver uas
    [ 113.455315] Initializing USB Mass Storage driver...
    [ 113.468051] scsi5 : usb-storage 2-1:1.0
    [ 113.468180] usbcore: registered new interface driver usb-storage
    [ 113.468182] USB Mass Storage support registered.
    [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6
    [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
    [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
    [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.501649] sdb: sdb1
    [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
    [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
    [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!
    ...

    So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1.
    This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

    e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925 ). I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago; this is what htop says about it now (2012-11-06_1900):

    PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
    3704 root 39  19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that it's a good idea to see if the disk is just that slow, because maybe it's damaged, and I think these outputs tell me that this is not the case here:

    kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
    /dev/sdb:
     Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
     Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec
    kaefert@blechmobil:~$ sudo hdparm /dev/sdb
    /dev/sdb:
     multcount = 0 (off)
     readonly = 0 (off)
     readahead = 256 (on)
     geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this:

    kaefert@blechmobil:~$ iostat -x
    Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU)
    avg-cpu: %user %nice %system %iowait %steal %idle
             14,24 47,81  14,63   0,95   0,00  22,37
    Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
    sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

    kaefert@blechmobil:~$ sudo strace -p3704
    lseek(4, 41026998272, SEEK_SET) = 41026998272
    write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
    lseek(4, 48404766720, SEEK_SET) = 48404766720
    read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
    lseek(4, 41027002368, SEEK_SET) = 41027002368
    write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
    lseek(4, 48404770816, SEEK_SET) = 48404770816
    read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
    lseek(4, 41027006464, SEEK_SET) = 41027006464
    write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
    lseek(4, 48404774912, SEEK_SET) = 48404774912
    read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
    ^CProcess 3704 detached

    I see around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: Will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this being a 3 terabyte disk, which would give me 134 days to go, if the speed stays constant and e2fsck scans the disk like this completely and only once. Do you have some advice for me?
    I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and since I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this:

    lseek(4, 1419860611072, SEEK_SET) = 1419860611072
    read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
    lseek(4, 43018145792, SEEK_SET) = 43018145792
    write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
    lseek(4, 1419860615168, SEEK_SET) = 1419860615168
    read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
    lseek(4, 43018149888, SEEK_SET) = 43018149888
    write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
    lseek(4, 1419860619264, SEEK_SET) = 1419860619264
    read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
    lseek(4, 43018153984, SEEK_SET) = 43018153984
    write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseek lines before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be linear progress on a big scale; maybe there are only some areas that need work, with big gaps in between them.

    UPDATE2: Okay, big disappointment, the numbers are back to very small again (2012-11-07_0720):

    lseek(4, 52174548992, SEEK_SET) = 52174548992
    read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096
    lseek(4, 46603526144, SEEK_SET) = 46603526144
    write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096

    So either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong.

    UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can run testdisk while e2fsck is running, I tried that, though without a lot of success. When asking testdisk to display the data of my partition, this is what I get:

    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
     1 P Linux 0 4 5 45600 40 8 732566272
    Can't open filesystem. Filesystem seems damaged.

    And this is what strace currently gives me (2012-11-07_1030):

    lseek(4, 212460343296, SEEK_SET) = 212460343296
    read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096
    lseek(4, 47347830784, SEEK_SET) = 47347830784
    write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096

    (times are in CET)

    Read the article

  • Oracle Data Provider and casting

    - by mrjoltcola
    I use Oracle's specific data provider, not the Microsoft provider that is being discontinued. The thing I've found about ODP.NET is how picky it is with data types. Where JDBC and other ADO providers just convert and make things work, ODP.NET will throw an invalid cast exception unless you get it exactly right. Consider this code:

    String strSQL = "SELECT DOCUMENT_SEQ.NEXTVAL FROM DUAL";
    OracleCommand cmd = new OracleCommand(strSQL, conn);
    reader = cmd.ExecuteReader();
    if (reader != null && reader.Read()) {
        Int64 id = reader.GetInt64(0);
        return id;
    }

    Due to ODP.NET's pickiness on conversion, this doesn't work. My usual options are:

    1) Retrieve into a Decimal and return it with a cast to an Int64 (I don't like this because Decimal is just overkill, and at least once I remember reading it was deprecated...):

    Decimal id = reader.GetDecimal(0);
    return (Int64)id;

    2) Or cast in the SQL statement to make sure it fits into Int64, like NUMBER(18):

    String strSQL = "SELECT CAST(DOCUMENT_SEQ.NEXTVAL AS NUMBER(18)) FROM DUAL";

    I do (2), because I feel it's just not clean pulling a number into a .NET Decimal when my domain types are Int32 or Int64. Other providers I've used are nice (smart) enough to do the conversion on the fly. Any suggestions from the ODP.NET gurus?
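
    For contrast, here is a minimal sketch of the JDBC behavior alluded to above, fetching the same sequence value with the Oracle thin driver; the connection URL and credentials are placeholders. JDBC performs the conversion inside getLong, with no cast in the SQL:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class SequenceFetch {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT DOCUMENT_SEQ.NEXTVAL FROM DUAL")) {
                    if (rs.next()) {
                        // getLong converts the Oracle NUMBER to a Java long.
                        long id = rs.getLong(1);
                        System.out.println("next id: " + id);
                    }
                } finally {
                    conn.close();
                }
            }
        }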

    Read the article

  • Calling an Oracle PL/SQL procedure with Custom Object return types from ojdbc6 JDBC thin drivers

    - by Andrew Harmel-Law
    I'm writing some JDBC code which calls an Oracle 11g PL/SQL procedure which has a Custom Object return type. Whenever I try to register my return types, I get either ORA-03115 or PLS-00306 as an error when the statement is executed, depending on the type I set. An example is below:

    PL/SQL code:

    Procedure GetDataSummary (p_my_key IN KEYS.MY_KEY%TYPE,
                              p_recordset OUT data_summary_tab,
                              p_status OUT VARCHAR2);

    Java code:

    String query = "begin manageroleviewdata.getdatasummary(?, ?, ?); end;";
    CallableStatement stmt = conn.prepareCall(query);
    stmt.setInt(1, 83);
    stmt.registerOutParameter(2, OracleTypes.CURSOR); // Causes error: PLS-00306
    stmt.registerOutParameter(3, OracleTypes.VARCHAR);
    stmt.execute(); // Error mentioned above thrown here.

    Can anyone provide me with an example showing how I can do this? I guess it's possible. However I can't see a rowset OracleType. CURSOR, REF, DATALINK, and more fail. Apologies if this is a dumb question. I'm not a PL/SQL expert and may have used the wrong terminology in some areas of my question. (If so, please edit me.) Thanks in advance. Regs, Andrew
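
    For reference, a pattern that is commonly suggested for the thin driver is to return a SYS_REFCURSOR rather than a PL/SQL index-by table, since OracleTypes.CURSOR maps to a ref cursor. The sketch below assumes a hypothetical wrapper procedure getdatasummary_rc that opens a SYS_REFCURSOR over the same data; the connection details are placeholders:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        import oracle.jdbc.OracleTypes;

        public class RefCursorCall {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                // Hypothetical wrapper returning SYS_REFCURSOR instead of
                // the data_summary_tab PL/SQL table type.
                CallableStatement stmt = conn.prepareCall(
                    "begin manageroleviewdata.getdatasummary_rc(?, ?, ?); end;");
                stmt.setInt(1, 83);
                stmt.registerOutParameter(2, OracleTypes.CURSOR);
                stmt.registerOutParameter(3, OracleTypes.VARCHAR);
                stmt.execute();
                // The ref cursor comes back as a ResultSet.
                ResultSet rs = (ResultSet) stmt.getObject(2);
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                System.out.println("status: " + stmt.getString(3));
                rs.close();
                stmt.close();
                conn.close();
            }
        }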

    Read the article

  • NHibernate Oracle stored procedure problem

    - by Mr. Flint
    ------ Using VS2008, ASP.NET with C#, Oracle, NHibernate ------

    I have tested my stored procedure. It's working, but not with NHibernate. Here is the code:

    Procedure:

    create or replace procedure ThanaDelete (id number)
    as
    begin
      delete from thana_tbl where thana_code = id;
    end

    Mapping file:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="DataTransfer" namespace="DataTransfer">
      <class name="DataTransfer.Models.Thana, DataTransfer" table="THANA_TBL">
        <id name="THANA_CODE" column="THANA_CODE" type="Int32" unsaved-value="0">
          <generator class="native">
            <param name="sequence"> SEQ_TEST </param>
          </generator>
        </id>
        <property name="THANA_NAME" column="THANA_NAME" type="string" not-null="false"/>
        <property name="DISTRICT_CODE" column="DISTRICT_CODE" type="Int32" not-null="false"/>
        <property name="USER_ID" column="USER_ID" type="string" not-null="false"/>
        <property name="TRANSACTION_DATE" column="TRANSACTION_DATE" type="Date" not-null="false"/>
        <property name="TRANSACTION_TIME" column="TRANSACTION_TIME" type="string" not-null="false"/>
        <sql-delete>exec THANADELETE ? </sql-delete>
      </class>
    </hibernate-mapping>

    Error:

    Message: could not delete: [DataTransfer.Models.Thana#10][SQL: exec THANADELETE ?]
    Source: NHibernate
    Inner Exception: System.Data.OracleClient.OracleException
    Message: ORA-00900: invalid SQL statement

    Read the article

  • Why DataColumn AllowDbNull is true even if Oracle DB does not allow null

    - by matti
    Hi. I have column SomeId in table SomeLink. When I look with tOra or SQL Plus Worksheet, both state:

    tOra:
    Column name  Data type  Default  Null      Comment
    SOMEID       INTEGER    {null}   NOT NULL  {null}

    SQL Plus:
    SOMEID  NOT NULL  NUMBER(38)

    I have authored a method that's intended to give default values to all NOT NULL fields that don't have values:

    public static void GetDefaultValuesForNonNullColumns(DataRow row)
    {
        foreach (DataColumn col in row.Table.Columns)
        {
            if (Convert.IsDBNull(row[col]) && !col.AllowDBNull)
            {
                if (ColumnIsNumeric(col.DataType))
                    row[col] = 0;
                else if (col.DataType == typeof(DateTime))
                    row[col] = DateTime.Now;
                else if (col.DataType == typeof(String))
                    row[col] = string.Empty;
                else if (col.DataType == typeof(Char))
                    row[col] = ' ';
                else
                    throw new Exception(string.Format("Unsupported column type: {0}", col.DataType));
            }
        }
    }

    When SOMEID is handled in the loop, its AllowDBNull is true. I really can't understand why. The table is created in the DataSet like this:

    _someLinkAdptr = _dbFactory.CreateDataAdapter();
    _someLinkAdptr.SelectCommand = _dbFactory.CreateCommand();
    _someLinkAdptr.SelectCommand.Connection = _cnctn;
    _someLinkAdptr.SelectCommand.CommandText = GetSomeLinkSelectTxtAndParams(_someLinkAdptr.SelectCommand, UndefinedValue.ToString(), UndefinedValue.ToString());

    The select command returns no rows. The idea is that I can then use a CommandBuilder to get the InsertCommand without building it myself. The row is added to the DataSet's table like this:

    private static void CreateDocLink(int anId, int anotherId)
    {
        DataRow row = _someDataSet.Tables["SomeLink"].NewRow();
        row["AnId"] = anId;
        row["AnotherId"] = anotherId;
        Utility.GetDefaultValuesForNonNullColumns(row);
        _someDataSet.Tables["SomeLink"].Rows.Add(row);
    }

    When the DataAdapter update is sent to the Oracle DB I get:

    ORA-01400: cannot insert NULL into (SOMESCHEMA.SOMELINK.SOMEID)

    Cheers & BR -Matti

    Read the article

  • Getting weird issue with TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the TO_NUMBER function in the where clause on a varchar2 column if the number of records exceeds a certain number n. I used n as there is no exact number of records at which it happens. On one DB it happened after n was 1 million, on another when it was 0.1 million.

    E.g. I have a table with 10 million records, say table Country, which has field1 varchar2 containing numeric data, and Id. If I do a query such as:

    select * from country where to_number(field1) = 23 and id > 1 and id < 100000

    This works. But if I do the query:

    select * from country where to_number(field1) = 23 and id > 1 and id < 100001

    It fails saying invalid number. Next I try the query:

    select * from country where to_number(field1) = 23 and id > 2 and id < 100001

    It works again. As I only got "invalid number" it was confusing, but in the log file it said:

    Memory Notification: Library Cache Object loaded into SGA
    Heap size 3823K exceeds notification threshold (2048K)
    KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN' and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')), ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id), t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID from sqlplan s,sys.dba_objects d

    It seems to be related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER or Oracle functions for large data?

    Read the article

  • Error while converting function from oracle to SQL Server

    - by sss
    Hi, I am migrating a function from Oracle to SQL Server 2008. This function raises "SELECT statements included within a function cannot return data to a client" as an error. How can I solve this problem?

    Original PL/SQL code:

    CREATE OR REPLACE function f_birim_cevrim_katsayi (p_ID_MAMUL in number, p_ID_BIRIMDEN in number, p_ID_BIRIME in number)
    return number
    is
      v_katsayi number;
    begin
      v_katsayi:=0;
      if p_ID_BIRIMDEN!=p_ID_BIRIME then
        for c in (
          select * from CR_BIRIM_CEVRIM
           where ID_MAMUL = p_ID_MAMUL
             and ( (ID_BIRIM = p_ID_BIRIMDEN and ID_BIRIM2 = p_ID_BIRIME)
                OR (ID_BIRIM2 = p_ID_BIRIMDEN and ID_BIRIM = p_ID_BIRIME) )
             and VALID = 1)
        loop
          if c.ID_BIRIM=p_ID_BIRIMDEN then
            v_katsayi:=c.MT_ORAN;
          else
            v_katsayi:=1/c.MT_ORAN;
          end if;
        end loop;
      else
        v_katsayi:=1;
      end if;
      return round(v_katsayi,10);
    exception
      when others then
        return 0;
    end;

    T-SQL code:

    If Exists ( SELECT name FROM sysobjects WHERE name = 'f_birim_cevrim_katsayi' AND type = 'FN')
      DROP FUNCTION f_birim_cevrim_katsayi
    GO
    CREATE FUNCTION f_birim_cevrim_katsayi
    ( @p_ID_MAMUL FLOAT , @p_ID_BIRIMDEN FLOAT , @p_ID_BIRIME FLOAT )
    RETURNS float
    AS
    BEGIN
      DECLARE @adv_error INT
      DECLARE @v_katsayi FLOAT
      SELECT @v_katsayi = 0
      IF @p_ID_BIRIMDEN != @p_ID_BIRIME
      BEGIN
        DECLARE cursor_for_inline_select1 CURSOR LOCAL FOR
          SELECT * FROM CR_BIRIM_CEVRIM
           WHERE ID_MAMUL = @p_ID_MAMUL
             AND ((ID_BIRIM = @p_ID_BIRIMDEN AND ID_BIRIM2 = @p_ID_BIRIME)
               OR (ID_BIRIM2 = @p_ID_BIRIMDEN AND ID_BIRIM = @p_ID_BIRIME))
             AND VALID = 1
        OPEN cursor_for_inline_select1
        FETCH NEXT FROM cursor_for_inline_select1
        WHILE (@@FETCH_STATUS <> -1)
        BEGIN
          IF c.ID_BIRIM = @p_ID_BIRIMDEN
          BEGIN
            SELECT @v_katsayi = c.MT_ORAN
          END
          ELSE
          BEGIN
            SELECT @v_katsayi = 1/c.MT_ORAN
          END
        END
        CLOSE cursor_for_inline_select1
        DEALLOCATE cursor_for_inline_select1
      END
      ELSE
      BEGIN
        SELECT @v_katsayi = 1
      END
      DEALLOCATE cursor_for_inline_select1
      return ROUND(@v_katsayi, 10)
      GOTO ExitLabel1
    Exception1:
      BEGIN
        DEALLOCATE cursor_for_inline_select1
        return 0
      END
    ExitLabel1:
      return ROUND(@v_katsayi, 10)
    END
    GO

    Read the article

  • Create an Oracle function that returns a table

    - by Craig
    I'm trying to create a function in a package that returns a table. I hope to call the function once in the package, but be able to re-use its data multiple times. While I know I could create temp tables in Oracle, I was hoping to keep things DRY. So far, this is what I have: Header: CREATE OR REPLACE PACKAGE TEST AS TYPE MEASURE_RECORD IS RECORD ( L4_ID VARCHAR2(50), L6_ID VARCHAR2(50), L8_ID VARCHAR2(50), YEAR NUMBER, PERIOD NUMBER, VALUE NUMBER ); TYPE MEASURE_TABLE IS TABLE OF MEASURE_RECORD; FUNCTION GET_UPS( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTLHY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE; END TEST; Body: CREATE OR REPLACE PACKAGE BODY TEST AS FUNCTION GET_UPS ( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTLHY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE IS T MEASURE_TABLE; BEGIN SELECT ... INTO T FROM ... ; RETURN T; END GET_UPS; END TEST; The header compiles, but the body does not. One error message is 'not enough values', which probably means that I should be selecting into the MEASURE_RECORD rather than the MEASURE_TABLE. What am I missing?
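
    For the record, 'not enough values' here usually means the SELECT list does not line up with the record's fields, and filling a nested table from a query needs BULK COLLECT. A minimal sketch of the body under that assumption follows - the source table MEASURES and its MEASURE_DATE column are illustrative, not from the original post:

      CREATE OR REPLACE PACKAGE BODY TEST AS
        FUNCTION GET_UPS (
          TIMESPAN_IN      IN VARCHAR2 DEFAULT 'MONTLHY',
          STARTING_DATE_IN DATE,
          ENDING_DATE_IN   DATE
        ) RETURN MEASURE_TABLE
        IS
          T MEASURE_TABLE;
        BEGIN
          SELECT L4_ID, L6_ID, L8_ID, YEAR, PERIOD, VALUE  -- one item per MEASURE_RECORD field
          BULK COLLECT INTO T                              -- fills the whole collection at once
          FROM MEASURES                                    -- hypothetical source table
          WHERE MEASURE_DATE BETWEEN STARTING_DATE_IN AND ENDING_DATE_IN;
          RETURN T;
        END GET_UPS;
      END TEST;

    Note that because MEASURE_TABLE is a PL/SQL type declared in the package spec, the result can be reused from other PL/SQL code but not queried with TABLE() from plain SQL; that would require SQL-level object types or a pipelined function.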

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management by Joe Golemba, Vice President, Product Management, Oracle WebCenter As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative - wasted effort or even regulatory compliance issues - is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities - electronic records and signatures - were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards - such as content de-duplication - can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. 
Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide - with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures - is becoming increasingly critical for life sciences companies. Making the move Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information has been gaining importance in the life sciences industry for years. 
The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and shorten time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More Information Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • ImgBurn fails to burn data CD-R disk due to "Layouts do not match" error

    - by 0xAether
    I have a recurring problem with the program ImgBurn. Whenever I try to burn anything to a CD-R using ImgBurn, it burns just fine except when I go to verify the disk, when it tells me that the "Layouts do not match". Windows 7 shows the disk as completely blank, although I can see on the bottom of the disk that it has been written to. I can burn ISO files to DVD-Rs just fine; this only seems to happen with CD-Rs. The CD-Rs I'm using are Memorex Cool Colors 52x CD-Rs. I have looked on Google, and it seems I'm not the only one this happens to. Unfortunately, no one has been able to provide an explanation. I have included the log file from the last CD I burnt. If you need anything else to better diagnose this problem, I will gladly provide it. ; //****************************************\\ ; ImgBurn Version 2.5.7.0 - Log ; Monday, 19 November 2012, 16:11:57 ; \\****************************************// ; ; I 16:04:55 ImgBurn Version 2.5.7.0 started! I 16:04:55 Microsoft Windows 7 Ultimate x64 Edition (6.1, Build 7601 : Service Pack 1) I 16:04:55 Total Physical Memory: 4,156,380 KB - Available: 3,317,144 KB I 16:04:55 Initialising SPTI... I 16:04:55 Searching for SCSI / ATAPI devices... I 16:04:56 -> Drive 1 - Info: Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:04:56 Found 1 DVD±RW/RAM! I 16:05:37 Operation Started! I 16:05:37 Source File: C:\Users\Aaron\Desktop\VMware Workstation 9.iso I 16:05:37 Source File Sectors: 223,057 (MODE1/2048) I 16:05:37 Source File Size: 456,820,736 bytes I 16:05:37 Source File Volume Identifier: VMwareWorksta9 I 16:05:37 Source File Volume Set Identifier: 20121119_2102 I 16:05:37 Source File File System(s): ISO9660, Joliet I 16:05:37 Destination Device: [1:0:0] Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:05:37 Destination Media Type: CD-R (Disc ID: 97m17s06f, Moser Baer India) I 16:05:37 Destination Media Supported Write Speeds: 10x, 16x, 20x, 24x I 16:05:37 Destination Media Sectors: 359,847 I 16:05:37 Write Mode: CD I 16:05:37 Write Type: SAO I 16:05:37 Write Speed: 6x I 16:05:37 Lock Volume: Yes I 16:05:37 Test Mode: No I 16:05:37 OPC: No I 16:05:37 BURN-Proof: Enabled W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - Wanted: 1,058 KB/s (6x), Got: 1,764 KB/s (10x) / 11,080 KB/s (63x) W 16:05:37 The drive only supports writing these discs at 10x, 16x, 20x, 24x. I 16:05:38 Filling Buffer... (80 MB) I 16:05:40 Writing LeadIn... I 16:06:07 Writing Session 1 of 1... (1 Track, LBA: 0 - 223056) I 16:06:07 Writing Track 1 of 1... (MODE1/2048, LBA: 0 - 223056) I 16:11:00 Synchronising Cache... I 16:11:18 Exporting Graph Data... I 16:11:18 Graph Data File: C:\Users\Aaron\AppData\Roaming\ImgBurn\Graph Data Files\Optiarc_DVD_RW_AD-7560S_SH03_MONDAY-NOVEMBER-19-2012_4-05_PM_97m17s06f_6x.ibg I 16:11:18 Export Successfully Completed! I 16:11:18 Operation Successfully Completed! - Duration: 00:05:41 I 16:11:18 Average Write Rate: 1,522 KB/s (10.1x) - Maximum Write Rate: 1,544 KB/s (10.3x) I 16:11:18 Cycling Tray before Verify... W 16:11:23 Waiting for device to become ready... 
I 16:11:47 Device Ready! E 16:11:47 CompareImageFileLayouts Failed! - Session Count Not Equal (1/0) E 16:11:47 Verify Failed! - Reason: Layouts do not match. I 16:11:57 Close Request Acknowledged I 16:11:57 Closing Down... I 16:11:57 Shutting down SPTI... I 16:11:57 ImgBurn closed!

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, one of the disks (disk #5; disk #2 had gone bad in the meantime) failed, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using" but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a natively created 7-disk RAID-6 would be laid out? 
Thanks! Some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>
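
    In case it helps anyone in a similar spot, the usual brute-force approach from the answer referenced above carries over to RAID-6: re-create the array with --assume-clean for each candidate disk order (with "missing" in the two failed slots) and see which order yields a mountable filesystem. A heavily hedged shell sketch follows - the device names and slot positions are guesses taken from the --detail output above, and it should only ever be run against overlay devices or copies, never the sole copy of the data:

      # Try candidate orders; add one quoted line per permutation to test.
      for order in \
          "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3" \
          "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdf3 /dev/sdg3"
      do
          mdadm --stop /dev/md0 2>/dev/null
          # --assume-clean rewrites only superblocks, not data; still destructive to metadata
          yes | mdadm --create /dev/md0 --assume-clean --metadata=1.0 \
                --chunk=64 --level=6 --raid-devices=7 $order
          # -n: report only, change nothing; a clean check suggests a plausible order
          fsck.ext4 -n /dev/md0 && echo "plausible order: $order"
      done

    As for the layout question: mdadm's reshape keeps the array's layout algorithm (algorithm 2, left-symmetric, per the --examine output) and redistributes the chunks across all members, so a grown 7-disk array should match a natively created one with the same parameters - though verifying that against a throwaway test array before trusting it is wise.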

    Read the article

  • "The server closed the connection without sending any data"

    - by Toby
    My question covers: (1) server setup, (2) the problem, (3) diagnostic information, (4) what I've tried, and (5) the specific help needed. 1. I have the following server setup: Debian Squeeze Linux 2.6.32-5-amd64 Apache2-mpm-prefork 2.2.16-6+squeeze10 PHP 5.3.3-7+squeeze14 This server is protected with the Suhosin Patch 0.9.9.1 Max Requests Per Child: 0 - Keep Alive: on - Max Per Connection: 100 Timeouts Connection: 300 - Keep-Alive: 15 Loaded Modules core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_auth_digest mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_reqtimeout mod_rewrite mod_setenvif mod_ssl mod_status Wordpress 3.4.2 (Upgrading to 3.5 soon :) 2. The problem When I restart the server (sudo shutdown -r now), going to any website page results in the following error from the web browser (in this case Google Chrome, but other browsers show the same error). This error can also occur an hour or so after all is working OK, seemingly randomly, which is my biggest concern as it means my server is not reliable: No data received Unable to load the web page because the server sent no data. Here are some suggestions: Reload this web page later. Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data. 3. Diagnostic information The Apache error log contains the following entries: [Fri Dec 14 22:23:27 2012] [notice] child pid 1955 exit signal Floating point exception (8) [Fri Dec 14 22:23:27 2012] [notice] child pid 1956 exit signal Floating point exception (8) [Fri Dec 14 22:23:29 2012] [notice] child pid 1957 exit signal Floating point exception (8) [Fri Dec 14 22:23:30 2012] [notice] child pid 1958 exit signal Floating point exception (8) [Fri Dec 14 22:23:32 2012] [notice] child pid 1959 exit signal Floating point exception (8) [Fri Dec 14 22:23:32 2012] [notice] child pid 1960 exit signal Floating point exception (8) [Fri Dec 14 22:23:34 2012] [notice] child pid 1961 exit signal Floating point exception (8) [Fri Dec 14 22:23:34 2012] [notice] child pid 1962 exit signal Floating point exception (8) 4. What I've tried a) I can 'fix' the website temporarily by restarting the server twice (restarting it once does not work) using the following commands. NB: the 'reload' option does not work; I have to use restart twice. However, the error can reoccur sometime later. sudo /etc/init.d/apache2 restart sudo /etc/init.d/apache2 restart b) I have tried disabling Suhosin by uninstalling php5-suhosin, but a phpinfo page still shows "This server is protected with the Suhosin Patch 0.9.9.1". I have tried putting Suhosin into simulation mode by creating a file /etc/php5/apache2/conf.d/suhosin.ini containing: [suhosin] suhosin.simulation = On The phpinfo page shows the suhosin.ini file in the list of "Additional .ini files parsed", but it still shows "This server is protected with the Suhosin Patch 0.9.9.1" c) Increasing the PHP memory limit In /etc/php5/apache2/ : ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 512M d) Disabling all Wordpress plugins, and going back to the default theme. 5. Specific help needed I would very much like help in debugging what is going on here. I am not sure how to determine which processes in the Apache error log are exiting with "[notice] child pid 1955 exit signal Floating point exception (8)", or what is causing them to exit. 
I would also like to know whether Suhosin is part of the problem (and how to disable it if it is). Thank you in advance for any advice or tips you can offer in helping me debug this.
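
    One way to get at the "exit signal Floating point exception (8)" question, offered as a sketch rather than a known fix: let the crashing children dump core and pull a backtrace, which will show whether the SIGFPE comes from PHP, Suhosin, or somewhere else. The paths below assume a stock Debian Squeeze layout:

      mkdir -p /var/cache/apache2-cores && chown www-data /var/cache/apache2-cores
      echo 'CoreDumpDirectory /var/cache/apache2-cores' >> /etc/apache2/apache2.conf
      ulimit -c unlimited              # set before starting Apache, e.g. in the init script
      /etc/init.d/apache2 restart
      # after the next crash:
      gdb /usr/sbin/apache2 /var/cache/apache2-cores/core*   # then type: bt full

    Regarding Suhosin: the "protected with the Suhosin Patch" banner refers to the patch compiled into Debian's PHP binary itself, not the php5-suhosin extension, which would explain why uninstalling the extension does not remove the banner; the suhosin.simulation switch likewise only affects the extension.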

    Read the article

  • How to save a ntfs partition which suddenly became empty

    - by SteveO
    One NTFS partition of my laptop was suddenly wiped out, without any warning, when I rebooted from Windows 7 to Ubuntu 12.04 today. I need help saving my files on that partition, which are important and unfortunately haven't been backed up yet. My laptop has two operating systems, Windows 7 and Ubuntu 12.04, with an NTFS partition shared between the two for storing data files (109 GB, about 97% of which has been used). I have almost always been using Ubuntu, but today I happened to have to work under Windows. Following is a record of what happened, in time order. When I started into Windows 7, right before being able to log in, it took a while and two reboots to configure Windows. I thought this was normal, since the last time I used Windows, two weeks ago, it took a very long time and several reboots to update it (the time before that had been in November last year). After finally being able to log in to Windows 7, I installed LibreOffice, MathType (I got it from http://dl.portablesoft.org/down/?id=2515, which I originally thought was a trial version but later learned was a cracked version, which felt wrong; I made a copy of it at dropbox http://dl.dropbox.com/u/13029929/MathType_6.8_PortableSoft.rar, not to distribute it but to list it there in case it helps to identify the problem), and MiKTeX. I then edited some .doc files on the NTFS partition under both Microsoft Office with MathType and LibreOffice. When I finished working under Windows and rebooted into Ubuntu, Ubuntu did some filesystem checking and reported that the NTFS partition could not be mounted. I then rebooted again into Windows, and found that the NTFS partition had been emptied, i.e. all the data files were gone, and only one system file, bootsqm.dat, and one system directory, System Volume Information, were there, with their last-updated time being the time of my first reboot from Windows to Ubuntu (in fact, 4 hours ahead of the actual time of that reboot; see immediately below). I also noticed that the time shown by Windows is not correct for my time zone ((UTC-05:00) Eastern Time (US & Canada)): it is 4 hours ahead of the correct time (my current time is 3am, but the computer shows 7am). The same happened when I rebooted into Ubuntu again: the NTFS partition had been emptied, left with only the one Windows system file bootsqm.dat and the one Windows system directory System Volume Information, and the time shown by Ubuntu was 4 hours ahead of the correct time. I wonder what I can do to get my data files back from the NTFS partition? If I am not able to do it myself, would a professional be able to help me out? Thanks a lot! PS: I don't think I did anything that should have emptied that partition, but I did do quite a lot of work during the stage right before the reboot from Windows to Ubuntu when the problem occurred. Did I make some mis-step?
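
    A hedged starting point for the recovery itself, from the Ubuntu side: work read-only, image the partition first, and only then run undelete tools against the image. The device name /dev/sda5 below is a placeholder for the real NTFS partition, and package names may vary slightly by release:

      sudo apt-get install gddrescue testdisk ntfsprogs
      # 1. Image the partition to a drive with enough free space (never recover in place)
      sudo ddrescue /dev/sda5 /media/usb/ntfs.img /media/usb/ntfs.log
      # 2. List what is still recoverable from the image
      sudo ntfsundelete --scan /media/usb/ntfs.img
      # 3. Or let testdisk dig deeper (Advanced -> Undelete, or a deeper partition search)
      sudo testdisk /media/usb/ntfs.img

    If the files matter enough, stopping after the imaging step and handing the image to a professional service is a reasonable option, since every further boot of Windows risks overwriting more of the partition.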

    Read the article

  • WMI permissions: Select CommandLine, ProcessId FROM Win32_Process returns no data for CommandLine

    - by user57935
    Hi all, I am gathering performance data via WMI and would like to avoid having to use an account in the Administrators group for this purpose. The target machine is running Windows Server 2003 with the latest SP/updates. I've done what I believe to be the appropriate configuration to allow our user access to WMI (similar to what is described here: http://msdn.microsoft.com/en-us/library/aa393266.aspx). Here are the specific steps that were followed: Open Administrative Tools - Computer Management: Under Computer Management (Local) expand Services and Applications, right-click WMI Control and select Properties. In the Security tab, expand Root, highlight CIMV2, click Security (near bottom of window); add Performance Monitor Users and enable the options: Enable Account and Remote Enable. Open Administrative Tools - Component Services: Under Console Root go to Component Services - Computers - right-click My Computer and select Properties, select the COM Security tab; in “Access Permissions” click "Edit Default", select (or add then select) the “Performance Monitor Users” group, allow local access and remote access, and click OK. In “Launch and Activation Permissions” click “Edit Default”, select (or add then select) the “Performance Monitor Users” group, and allow Local and Remote Launch and Activation Permissions. Open Administrative Tools - Component Services: Under Console Root go to Component Services - Computers - My Computer - DCOM Config - highlight “Windows Management and Instrumentation”, right-click and select Properties, select the Security tab; under “Launch and Activation Permissions” select Customize, then click Edit, add the “Performance Users Group”, and allow local and remote Remote Launch and Remote Activation privileges. I am able to connect remotely via WMI Explorer, but when I perform this query: Select CommandLine, ProcessId FROM Win32_Process I get a valid result but every row has an empty CommandLine. If I add the user to the Administrators group and re-run the query, the CommandLine column contains the expected data. It seems there is a permission I am missing somewhere but I am not having much luck tracking it down. Many thanks in advance.
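
    A quick way to narrow down where the permission is missing, sketched with placeholder names: run the identical query remotely as each account and compare, since WMI only returns the Win32_Process properties the caller is allowed to read:

      rem TARGET and DOMAIN\perfuser are placeholders; wmic prompts for the password
      wmic /node:TARGET /user:DOMAIN\perfuser path Win32_Process get ProcessId,CommandLine

    If the non-admin output shows ProcessId but a blank CommandLine while the admin output shows both, the remaining gap is in reading other users' process details on the target rather than in the DCOM or CIMV2 namespace ACLs already configured above.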

    Read the article

  • XP machines on Domain not reporting WMI Data in a 2003 Server Environment

    - by Az
    I am running into a very quirky issue and I hope someone out there can help. We use a monitoring program for several networks we oversee that depends on WMI data for a great deal of its functionality. The Windows 2000 Professional workstations, as well as the 2003 servers in our network, report WMI data fine, but the Windows XP Professional machines will not let me view them from within the WMI snap-in for MMC (they return a Win32: Access Denied error). I am, of course, logged in with an account with domain admin privileges on the domain controller when I attempt it. DCOM is enabled in Component Services, and the remote security option is set to allow as well. If we remove a machine from the domain and rejoin it, some workstations show up as WMI-enabled temporarily, but when I try to access them again later I get the access denied error again, out of the blue. I have had this problem with the firewall turned on or off. I am hoping someone out there has had a similar problem or has advice. Thanks for your time! -Az

    Read the article

  • DKIM- Filter No Signature Data

    - by Vineet Sharma
    I have installed DKIM-Filter on Postfix after reading this tutorial: http://www.unibia.com/unibianet/systems-networking/how-setup-domainkeys-identified-mail-dkim-postfix-and-ubuntu-server My email now has a DKIM signature, but it still lands in the SPAM folder. Here is the header: Received-SPF: neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=69.164.193.167; Authentication-Results: mx.google.com; spf=neutral (google.com: 69.164.193.167 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=hardfail (test mode) [email protected] Received: from promote.a2labs.in (localhost [127.0.0.1]) by promote.a2labs.in (Postfix) with ESMTPA id 34858530E8 for <[email protected]>; Mon, 28 Feb 2011 12:23:07 +0530 (IST) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=a2labs.in; s=mail; t=1298875987; bh=bo+H1VYPIHMja2u7i1lnzr4k/j4Pe8iSf79bVw94XpI=; h=To:Subject:Message-ID:Date:From:Reply-To:MIME-Version: Content-Type:Content-Transfer-Encoding; b=nhTdlnUwo0iUJ92ycQzKSRjw 5Pfya0DJcJrAc8Mr2hIv8OLpgzBCzdOMWTGqR5nuUmAzgCGYBhYAM2XZwVxo9JG/iz7 oYKysmNQnskFx0TRyW3UOkDWcfHcPnCL6Y7fGzZWinmsyjsg47k+mKZg/e8jqlwTAMO PYKkt5pBz7SM0= Also, my mail.err file shows: Feb 28 12:17:03 ivineet dkim-filter[32181]: 1F788530E1: no signature data Feb 28 12:18:02 ivineet dkim-filter[32181]: 432BA530E2: no signature data How can I fix this?
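
    Two hedged observations that may help: "dkim=hardfail (test mode)" means Gmail fetched the published key (which has t=y, test mode, set) but the signature did not verify against it, so the DNS TXT record and the signing key likely disagree; and dkim-filter's "no signature data" lines are typically logged for inbound mail that simply carries no DKIM signature, which is harmless. To compare the published key with the local one (the key path below is illustrative - use whatever KeyFile points to in /etc/dkim-filter.conf):

      dig +short TXT mail._domainkey.a2labs.in                   # selector "mail", domain from the header
      sudo openssl rsa -in /etc/mail/dkim/mail.private -pubout   # public half of the signing key

    Once the two match and the signature verifies, removing t=y from the TXT record takes the selector out of test mode; publishing an SPF record for a2labs.in would also help, since the header shows spf=neutral from a best-guess record.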

    Read the article

  • Dell Poweredge 2600 RAID Transfer How-to

    - by DCookie
    Help, please! Hardware: Dell PowerEdge 2600, PERC 4, SCSI drives (1 standalone, 3 in a RAID 5 configuration). OS: Windows 2000 Server. In other words, a fairly old system. Anyway, we are in the process of taking over support for this site. The current tech wants out and is fading from view fast, so we need to solve this problem: the standalone disk (where the OS was) failed. We've replaced the disk and installed the OS, but need to know exactly how to proceed from here. I've never worked with a RAID system before, so I don't want to touch anything without knowing what I'm doing. We are not certain whether the site will want us to attempt to recover the array or wait for the old tech to become available. We have replaced the server with a temporary box, and recovered most of the data from an online backup service. However, the other tech failed to back up part of the data, and the only copy of it is on this RAID array; hence our caution. We have poked around minimally in the boot-up PERC config utility, and it seems to me that that's where we'll need to be to reclaim the array. Another possibility is that there is some Dell software for the RAID controller we need to acquire. Can anyone provide clues as to how to proceed from here? Any help GREATLY appreciated.

    Read the article

< Previous Page | 490 491 492 493 494 495 496 497 498 499 500 501  | Next Page >