Search Results

Search found 57200 results on 2288 pages for 'greg lunsford@oracle com'.


  • Welcome to Jackstown

    - by fatherjack
    I live in a small town, the population count isn't that great but let me introduce you to some of the population. We'll start with Martin the Doc, he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains"), the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, and then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs. Steve sees Brains regularly; that's because he is the librarian. He keeps all sorts of reading material and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled. Over the road from him is Greg, he's the town crier. We don't have a newspaper here so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department"; nowadays it's all about the recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones. There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit. Someone who works closely with Bart is Brad, he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make it understandable to someone else. To keep things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules. Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant: Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though, if you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man.
He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations. There aren't too many people left for you to meet in the town. There's Rob, he's an ex-professional sportsman. He played Hockey, Football, Cricket, you name it. He was in his element as goal keeper / wicket keeper and that shows in his personal life. He just goes about his business and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion. There goes Neil, he's a bit of an odd person; some people say he's gifted with special clairvoyant powers. Personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off to follow his advice though; he's usually spot on, and you won't be caught by surprise if you follow his recommendations – wherever they come from. Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly. And then there's me, what do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • Oracle R12 Inventory Management New Features Wrap-Up

    - by [email protected]
    Webcast: Oracle R12 Inventory Management New Features, held March 31st, 2010. Oracle Inventory management is an integrated part of Oracle SCM (Supply Chain Management). In this session you will get a comprehensive look at the changed features in Oracle R12 Inventory Management. This session will highlight the new features added and also explore their functionality. This webinar recording will introduce you to the built-in features of Oracle R12 Inventory Management such as: OPM Inventory Convergence, Multi-mode Inventory Management, Material Traceability, Fulfillment Optimization, Extended Best Practices. View the Oracle R12 Inventory Management New Features Webinar Online, Click Here: http://www.iwarelogic.com/oracle-r12-inventory-management-new-features.htm

    Read the article

  • What is VirtualBox?

    - by [email protected]
    VIRTUALBOX (open source for virtualization) Virtualization tools have grown in popularity almost exponentially over the last few years. One of them is VirtualBox; it runs on different operating systems, and for developers and computer engineers who want to test products without affecting their machines, it is one of the best alternatives. Additionally, VirtualBox has something that is even better: it is an open source program, which means you can use it on your machine at a cost of $0. This tool performs basically the same function as VMware Workstation, and it can even run virtual machines created with VMware Workstation without needing any kind of conversion. The latest version available as of today is 3.1.6, and it can be downloaded easily. Just visit one of these sites: "Get the latest VirtualBox version here" http://www.virtualbox.org/wiki/Downloads or the Oracle page "Get the latest VirtualBox version here" http://dlc.sun.com/virtualbox/vboxdownload.html

    Read the article

  • Oracle Transportation Management Annual Customer Conference

    - by [email protected]
    The 2010 Oracle Transportation Management (OTM) Conference will be held June 13-16 in Philadelphia, Pennsylvania. The conference brings together all things OTM: users, prospective users, development personnel, product strategy, implementation experts, and software and services partners.  With over 200 attendees, this conference is the premier place and time to learn about OTM, build relationships with peers, and get answers to all your OTM questions.  This year's conference will be held at the Sheraton Society Hill, One Dock St., Philadelphia, PA 19106. Companies speaking at this year's event include: AT&T, Land O'Lakes, BlueScope Steel, Baillie Lumber, Kraft, Sears, Roseburg Forest Products, Toyota, Beckman Coulter, Levi Strauss, Niagara Bottling, Smurfit Stone, PQ Corporation, and Office Depot. To register, click here: http://www.otmconference.com/ConfAgenda.aspx

    Read the article

  • Upcoming User Group Events in 2011

    - by john.orourke(at)oracle.com
    At a recent customer event, someone asked me if Oracle had any plans to re-create the Hyperion Solutions Conference.  Unfortunately the answer is no.  With so many different product lines it would be challenging and costly for Oracle to run separate user conferences for every product line, and it would create too many events for customers with multiple products to attend.  So Oracle OpenWorld is the company's main event for showcasing what's new and what's coming across all product lines.  If customers find Oracle OpenWorld too overwhelming or if the timing is bad, there are a number of other conferences, which are run by Oracle user groups and include a number of sessions focused on Oracle Hyperion EPM and BI products.  Here's a sneak preview of what's coming up for conferences in 2011 where you can network with other Hyperion users and learn what's new and what's coming in our products. Alliance 2011:  This conference is run by the Oracle Higher Education User Group (HEUG).  It's being held March 27 - 30th in lovely Denver, Colorado.  (a great location and time for skiers!)  This event is targeted at customers in Higher Education and Public Sector organizations and is expected to draw over 3,500 attendees.  There will be a number of sessions focusing on Oracle Hyperion EPM and BI products in the Budgeting track, as well as the Reporting & BI track.  This includes product-focused sessions delivered by Oracle and partners, as well as case studies delivered by customers.  Here's a link to the registration page where you can get more information: http://www.heug.org/p/cm/ld/fid=255 Collaborate 2011:  This conference is run by three different user groups: OAUG, IOUG and Quest.  It's being held April 10 - 14th in sunny Orlando, Florida.  (yes, sunshine and warmth!)  This event is targeted to customers with Oracle E-Business Suite, PeopleSoft, JD Edwards, Hyperion, Primavera and other products and is expected to draw over 5,000 attendees.  You'll find a number of sessions focused on Oracle Hyperion EPM and BI products in the BI/Data Warehousing/EPM track.  This includes product-focused sessions delivered by Oracle, our partners, and customers as well as a number of customer case studies.  There will also be an exhibit area with a number of demo pods focused on EPM and BI products.  Here's a link to the conference web site where you can get more information: http://collaborate.oaug.org/ Also, please note that the OAUG has a Hyperion SIG that runs focused EPM/Hyperion events throughout the year.  Here's a link to their web site where you can get more information: http://hyperionsig.oaug.org/ Kscope 2011:  Formerly the Kaleidoscope conference, this one is run by the Oracle Developer Tools User Group (ODTUG).  This conference is being held June 26 - 30th in Long Beach, CA. (surf's up!)  Historically, this event has focused on Oracle Development tools, but over the past few years the EPM and BI content has grown, with over 100 sessions planned this year.  So this event is becoming a great venue for existing Hyperion customers to learn about the latest developments with Oracle Essbase, Hyperion Planning, Hyperion Financial Management, Oracle BI and other products.  You'll also find hands-on workshops, product demonstrations as well as EPM and BI Symposiums run by Oracle Development staff.  Here's a link to the web site where you can get more details: http://www.kscope11.com/biepm
    UKOUG Conference Series: EPM and Hyperion 2011:  For Hyperion customers in the UK, the UKOUG has a Hyperion SIG that runs a focused conference for EPM and Hyperion products.  The 2011 event is planned for June in London.  Here's a link to the web site for this event where you can get more information: http://hyperion.ukoug.org/default.asp?p=8461 In addition to these conferences, you can also find Oracle EPM and BI content at regional user group meetings globally as well as Marketing events run by Oracle.  Check the events page at www.oracle.com for the details on upcoming Marketing and regional User Group events.  So while Oracle will not be trying to replicate the Hyperion Solutions conference, the good news is that there are a number of other events available where customers can find out what's new and what's coming with Oracle EPM and BI products.  And these events are running at different times of the year in different locations - so you can pick the event that makes the most sense for your company from a timing and location standpoint. I'll be delivering a number of sessions at the Alliance and Collaborate conferences and hope to see many of our loyal customers and partners at these events.  And there's always Oracle OpenWorld coming up in October, for which the planning has already started.  I look forward to seeing you in 2011.

    Read the article

  • Oracle & OAUG PO SIG's Procurement Executive Workshop - Burlington, MA April 29th, 2011

    - by david.hope-ross(at)oracle.com
    OAUG PO SIG and Oracle invite you to a day of learning and networking with your Boston area procurement peers. This event is focused on facilitating discussion among procurement executives, promoting best practices from leading customers, and sharing the vision that is driving enhancements to E-Business Suite procurement. OAUG PO SIG members and Oracle will share practical advice that improves technology adoption and lowers risk. Topics of interest include supplier management, upgrades, cloud-based deployment, as well as spend classification and analytics. For more information and registration please visit http://www.oracle.com/us/dm/h2fy11/68745-nafm10012033mpp102-se-334896.html.

    Read the article

  • CIO Magazine's State of the CIO and its Impact on Your EA

    - by david.olivencia(at)oracle.com
    CIO Magazine today released its State of the CIO report.  As most Enterprise Architects report to (or report very close to) the CIO, the report provides interesting insights as to where most CIOs' minds and priorities are.  The information will allow Enterprise Architects to better align plans, approaches, models, and strategies. The report's summary can be found here: http://assets.cio.com/documents/cache/pdfs/2011/dec15_gatefold.pdf  Specifically the article highlights: * How IT Makes A Difference * Critical Leadership Skills * Business Focused CIOs * Areas of Increasing Responsibility * Plans for 2015.  Enterprise Architects, what insights from this report will alter the way you successfully lead in 2011?  David Olivencia | Solution Director, Enterprise Architecture & Exa Services, Oracle Consulting Latin America and Caribbean

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-31

    - by Bob Rhubart
    SOA Suite 11g Asynchronous Testing with soapUI | Greg Mally Greg Mally walks you through testing asynchronous web services with the free edition of soapUI. The Role of Oracle VM Server for SPARC in a Virtualization Strategy | Matthias Pfutzner Matthias Pfutzner's overview of hardware and software virtualization basics, and the role that Oracle VM Server for SPARC plays in a virtualization strategy. Cloud Computing: Oracle RDS on AWS - Connecting with DB tools | Tom Laszewski Cloud expert and author Tom Laszewski shares brief comments about the tools he used to connect two Oracle RDS instances in AWS. Keystore Wallet File – cwallet.sso – Zum Teufel! | Christian Screen "One of the items that trips up a FMW implementation, if only for mere minutes, is the cwallet.sso file," says Oracle ACE Christian Screen. In this short post he offers information to help you avoid landing on your face. Thought for the Day "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May Source: SoftwareQuotes.com

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of composite) or External Reference (right-hand side of composite). This will give the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance. Where to Put the ECID When trying to determine where to store the ECID in the message, you basically have two options: Add a new optional element to your message schema. Leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc Configuring Composite Services/References Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relate to the message schema: The namespace for the element in the message. The XPath to the element in the message. To better understand this, let's look at an example for the following database table [image omitted]. When an Exposed Service is created via the Database Adapter Wizard in the composite, the following schema is created [image omitted]. For this example, the two Binding Properties we add to the ReadRow service in the composite are: <!-- Properties for the binding to propagate the ECID from the database table --> <property name="jca.ecid.nslist" type="xs:string" many="false">  xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property> <property name="jca.ecid.xpath" type="xs:string" many="false">  /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property> Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1) which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID and the payload contains everything except the ECID element/value.
The only time the ECID is visible is when it is stored safely in the resource technology like the database, a file, or a queue. Simple Database/File/JMS Example This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The composite for the example looks like the following [image omitted]. The flow of this example is as follows: Invoke database insert using the insertwithecidbpelprocess_client_ep Service. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadRowBPELProcess is created and it is associated with the retrieved ECID. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadFileBPELProcess is created and it is associated with the retrieved ECID. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of DequeueMessageBPELProcess is created and it is associated with the retrieved ECID. The logical flow ends. When viewing the Flow Trace in the Enterprise Manager, you will now see all the instances correlated via ECID [image omitted]. Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.
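
    For illustration, a minimal sketch of what the message schema could look like with the optional ECID element in place. The EcidPropagationCollection/EcidPropagation structure and namespace follow the XPath and namespace shown in the binding properties above; the payload column element is an assumption for illustration, not the actual wizard-generated schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"
                   elementFormDefault="qualified">
          <xs:element name="EcidPropagationCollection">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="EcidPropagation" maxOccurs="unbounded">
                  <xs:complexType>
                    <xs:sequence>
                      <!-- Assumed data column, for illustration only -->
                      <xs:element name="payload" type="xs:string"/>
                      <!-- Optional element holding the ECID; matches the
                           jca.ecid.xpath binding property shown above. The
                           JCA framework strips it after extraction. -->
                      <xs:element name="ecid" type="xs:string" minOccurs="0"/>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>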

    Read the article

  • All-day optimizer event...

    - by noreply(at)blogger.com (Thomas Kyte)
    I've recently taken over some of the responsibilities of Maria Colgan (also known as the "optimizer lady") so she can move on to supporting our new In-Memory Database features (note her new twitter handle for that: https://twitter.com/db_inmemory ). To that end, I have two one-day Optimizer classes scheduled this year (and more to follow next year!).  The first one will be Wednesday November 20th in Northern California.  You can find details for that here: http://www.nocoug.org/. The next one will be 5,500 miles (about 8,800 km) away in the UK - in Manchester.  That'll take place on December 5th, immediately following the UKOUG technical conference during the first week of December.  You can see all of the details for that here: http://www.ukoug.org/events/tom-kyte-seminar-2013/. I know I'll be doing one in Belgrade early next year, probably the first week in April. Stay tuned for details on that and for more to come.

    Read the article

  • Your Supply Chain May Be At Risk for Counterfeiting!

    - by stephen.slade(at)oracle.com
    Competition has driven unscrupulous participants into the supply chain, and your supply chain is exposed along with your customers.  60 Minutes had a terrific segment on fraud in the supply chain. This is a 13-minute clip on our global supply chains, and it pertains to us not only as supply chain professionals and participants in supply chain economics, but as consumers and users of the various products that enter our shopping cart.  It's a must-see news clip worth sharing: http://www.cbsnews.com/video/watch/?id=7359537n&tag=related;photovideo If you take medicines, you'll want to see this. Dr. Sanjay Gupta participated in this 9-month review of 'bad drugs' and reports for 60 Minutes on CBS.

    Read the article

  • Save BIG on Storage — with Oracle Advanced Compression

    - by [email protected]
    Recently, we published a podcast revealing just how much Oracle benefits from its internal use of Oracle Database 11g and Advanced Compression. With hundreds of TB and millions of dollars saved, Oracle Advanced Compression is dramatically reducing storage costs and substantially improving efficiency across the company. Now, here's your chance: Meet the experts, have your questions answered by them and immediately start using your storage more efficiently. On April 14th, join me for a live Webcast with Oracle's Tim Shetler, Vice President of Product Management, and Bill Hodak, Principal Product Manager, to learn just how Oracle Advanced Compression can: Reduce disk space requirements for all types of data; Improve query and storage performance; Lower storage costs throughout the datacenter. Register here!

    Read the article

  • FREE EXAM VOUCHERS - SECURE THEM NOW!

    - by michaela.seika(at)oracle.com
    Take the opportunity and come to the Oracle Implementation Specialist exam day for Oracle Applications, Middleware and Hardware during CeBIT. To make the best use of time and resources, on March 2nd you will have the chance to take the Implementation Specialist exam directly on site in Hannover. Please forward this invitation to the appropriate colleagues so that the exam can be scheduled in time: www.pearsonvue.com. A valid OPN membership is a prerequisite for participation. The exam will be held at the Multi Media Berufsbildende Schulen (MMBSS), Expo Plaza 3, 30539 Hannover. Please send your registration for the Implementation exam day to Ms. Michaela Seika by February 18th, with the following details: company, candidate & contact details, Oracle Company ID, EDU PIN, and yes/no for the allocation of an exam voucher. Please note: the free exam vouchers are only valid until March 2nd, 2011. (A maximum of two vouchers will be issued per company.)

    Read the article

  • Enablement 2.0 Get Specialized!

    - by michaela.seika(at)oracle.com
    Check the new OPN Certified Specialist Exam Study Guides – your quick reference to the training options that help partners pass the OPN Specialist Exams! What are the advantages of the Exam Study Guides? They cover the Implementation Specialist Exams that count towards the OPN Specialization program. They capture Exam Topics, Exam Objectives and Training Options. They define the Exam Objectives by learner or practitioner level of knowledge: Learner-level questions require the candidate to recall information to derive the correct answer; Practitioner-level questions require the candidate to derive the correct answer from an application of their knowledge. They also map, for each Exam Topic, the alternative training options available at Oracle. Where to find the Exam Study Guides? On Enablement 2.0 > Spotlight, on each Knowledge Zone > Implement, and on each Specialist Implementation Guided Learning Path. For more information: Oracle Certification Program Beta Exams, OPN Certified Specialist Exams, OPN Certified Specialist FAQ, Contact Us. Please direct any inquiries you may have to the Oracle Partner Enablement team at opn-edu_ww@oracle.com.

    Read the article

  • Will you be at the PASS Summit?

    - by KKline
    Don't forget about the cool services from SQL Sentry for Summiteers, like the free area shuttle and the printed area maps! Details are in a 5-part series by our CEO, Greg Gonzalez, at http://greg.blogs.sqlsentry.net/. Are you coming to Charlotte next week for the PASS Summit? Let's connect! Whenever it's open, I'll be in the Exhibit Hall at the SQL Sentry booth unless I'm delivering a session or something of that nature. Here are the sessions I've got on the calendar - Tue, Oct 15: First-timers...(read more)

    Read the article

  • Today is my first day in the land of backbone.js

    - by Andrew Siemer - www.andrewsiemer.com
    I am semi-excited to say that today is my first day in the land of backbone.js.  This will of course take me into many other new javascript-y areas.  As I have primarily been focused on building backend systems for the past many years…with no focus on client-side bits…this will be all new ground for me.  Very exciting! I am sure that this endeavor will lead to writing about many new findings along the way.  Expect the subject of near-future postings not to be related to MVC or server-side code. I am starting this journey by reading through the online book "Backbone Fundamentals": http://addyosmani.com/blog/backbone-fundamentals/  Has anyone read this yet?  Any feedback on that title? I have read through Derrick Bailey's thoughts here and here…also very good. Any suggestions on other nuggets for learning backbone?

    Read the article

  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
    In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud. Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete? Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace or the zoning or setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must. At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fare when evaluated against four core categories: · Program Governance · Access Management (e.g., Single Sign-On) · Identity and Access Governance (e.g., Identity Intelligence) · Enterprise Security (e.g., DLP and SIEM) Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward: 1. Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit? 2. Determine where you stand with respect to the four core areas – what are the gaps? 3. Decide how to cover the gaps – what role can the cloud play? Returning to our home remodeling analogy, at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure.
It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping a toe in instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning on and off services as needed. Better still, all the other capabilities are there, at the ready, whenever you need them. Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premises, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services. Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on. In a cloud offering, identity and access services lend themselves to the new paradigms described in my previous posts. Most importantly, it allows us all to focus on what we're meant to do – provide value, lower costs and increase security to our respective organizations. It’s that magic “silver bullet” that business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.

    Read the article

  • Where can I get the 10k common English dictionary words which Stack Overflow uses in related questions? [migrated]

    - by itpian.com
    Where can I get the 10k common English dictionary words which Stack Overflow uses in related questions? It's mentioned in this SE podcast - http://blog.stackoverflow.com/2008/12/podcast-32/ One of our major performance optimizations for the "related questions" query is removing the top 10,000 most common English dictionary words (as determined by Google search) before submitting the query to the SQL Server 2008 full text engine. It's shocking how little is left of most posts once you remove the top 10k English dictionary words. This helps limit and narrow the returned results, which makes the query dramatically faster.
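
    For illustration, a minimal Python sketch of the optimization described in the quote - stripping the most common words from a post before it reaches the full-text engine. The word-list file name and tokenizer are assumptions; the actual Stack Overflow list (derived from Google search data) is not public:

        import re

        # Hypothetical file: one common word per line, most frequent first.
        with open("top_10k_english_words.txt") as f:
            common_words = {line.strip().lower() for line in f}

        def strip_common_words(post: str) -> str:
            """Drop the top-10k common words so only distinctive terms
            reach the full-text query (fewer terms -> faster search)."""
            tokens = re.findall(r"[A-Za-z0-9']+", post)
            return " ".join(t for t in tokens if t.lower() not in common_words)

        # With a realistic word list, mostly "sort", "tuples" and
        # "Python" would survive this sentence:
        print(strip_common_words("How do I sort a list of tuples in Python?"))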

    Read the article

  • Oracle Technical Support Hotline [translated from Russian; original title garbled in extraction]

    - by [email protected]
    [Original Russian text garbled in extraction; reconstructed from the recoverable details.] For questions about working with My Oracle Support and telephone support, contact: hotline-russia_ru@oracle.com, phone in Moscow: +7 495 641 155, Monday-Friday, 9:00 to 18:00 Moscow time; from 18:00 to 9:00, use the global support numbers: tel. +44 870 400 0902, +44 870 400 0904.

    Read the article

  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running apache 2.2.6. I'm setting up a development environment. dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP. * Important: it is required that dev and prod are otherwise completely separate. Each will run its own apache instance. Each will use its own apache configuration. Each, prod and dev, will host http and https. I have this set up and working, but not as restrictive as I'd like. For instance, the production config: NameVirtualHost *:80 NameVirtualHost *:443 <VirtualHost *:80 > ServerName prod.domain.com # ... etc </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com # ... etc </VirtualHost> The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two apache configs. If all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site, with something that would fail, like no access to documentroot. But apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev configs. NameVirtualHost *:80 NameVirtualHost *:443 # ---------------------------------------------- # DUMMY HOSTS <VirtualHost *:8080 > ServerName dev.domain.com:8080 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> <VirtualHost *:4443 > ServerName dev.domain.com:4443 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> # ---------------------------------------------- # REAL PRODUCTION HOSTS <VirtualHost *:80 > ServerName prod.domain.com:80 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com:443 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> # .... other valid ssl setup </VirtualHost> Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds the dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.
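
    For reference, one standard way to make cross-port hits fail outright (a sketch, not the poster's config): in each instance, declare NameVirtualHost for every port that instance listens on, and make the first vhost on each port a deny-all catch-all, so only requests whose Host header matches a real ServerName succeed. Standard Apache 2.2 directives; paths and names are placeholders:

        NameVirtualHost *:8080
        NameVirtualHost *:4443

        # The first-listed vhost on a port is the default when no
        # ServerName matches; deny everything here so stray hostnames
        # (e.g. prod.domain.com:8080) fail instead of falling through.
        <VirtualHost *:8080>
            ServerName catchall.invalid
            DocumentRoot /var/www/empty
            <Directory /var/www/empty>
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        # The real vhost follows and answers only its own name.
        <VirtualHost *:8080>
            ServerName dev.domain.com
            DocumentRoot /srv/dev/htdocs
            # ... rest of the dev config; repeat the pattern for *:4443
        </VirtualHost>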

    Read the article

  • How could I stop ssh offering a wrong key?

    - by Alvaro Maceda
    (This is a problem with ssh, not gitolite) I've configured gitolite on my home server (ubuntu 12.04 server, open-ssh). I want a special identity file to administer the repositories, so I need to access my own host through ssh using two different identity keys. This is the content of my .ssh/config file: Host gitadmin.gammu.com User git IdentityFile /home/alvaro/.ssh/id_gitolite_mantra Host git.gammu.com User git IdentityFile /home/alvaro/.ssh/id_alvaro_mantra This is the content of my hosts file: # Git 127.0.0.1 gitadmin.gammu.com 127.0.0.1 git.gammu.com So I should be able to communicate with gitolite this way to access with the "normal" account: $ssh git.gammu.com and this way to access with the administrative account: $ssh gitadmin.gammu.com When I try to access with the normal account, all is ok: alvaro@mantra:~/.ssh$ ssh git.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to git.gammu.com closed. When I do the same with the administrative account: alvaro@mantra:~$ ssh gitadmin.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to gitadmin.gammu.com closed. It should show the administrative repository. If I launch ssh with verbose option: ssh -vvv gitadmin.gammu.com ... debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7f7cb6c0fbc0) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7f7cb6c044d0) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 ... It's offering the key id_alvaro_mantra, and it shouldn't!! The same happens when I specify the key with the -i option: ssh -i /home/alvaro/.ssh/id_gitolite_mantra -vvv gitadmin.gammu.com ...
debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7fa365237f90) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365230550) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365231050) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug3: sign_and_send_pubkey: RSA 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug1: Authentication succeeded (publickey). ... What the hell is happening??? I'm missing something, but I can't find what. These are the contents of my ~/.ssh dir: -rw-rw-r-- 1 alvaro alvaro 395 nov 14 18:00 authorized_keys -rw-rw-r-- 1 alvaro alvaro 326 nov 21 10:21 config -rw------- 1 alvaro alvaro 137 nov 20 20:26 environment -rw------- 1 alvaro alvaro 1766 nov 20 21:41 id_alvaromaceda.es -rw-r--r-- 1 alvaro alvaro 404 nov 20 21:41 id_alvaromaceda.es.pub -rw------- 1 alvaro alvaro 1766 nov 14 17:59 id_alvaro_mantra -rw-r--r-- 1 alvaro alvaro 395 nov 14 17:59 id_alvaro_mantra.pub -rw------- 1 alvaro alvaro 771 nov 14 18:03 id_developer_mantra -rw------- 1 alvaro alvaro 1679 nov 20 12:37 id_dos_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:37 id_dos_pruebasgit.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:46 id_gitolite_mantra -rw-r--r-- 1 alvaro alvaro 397 nov 20 12:46 id_gitolite_mantra.pub -rw------- 1 alvaro alvaro 1675 nov 20 21:44 id_gitpruebas.es -rw-r--r-- 1 alvaro alvaro 408 nov 20 21:44 id_gitpruebas.es.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:34 id_uno_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:34 id_uno_pruebasgit.pub -rw-r--r-- 1 alvaro alvaro 2434 nov 21 10:11 known_hosts There are a bunch of other keys which aren't offered... why is id_alvaro_mantra offered and not the others? I can't understand. I need some help; I don't know where to look....
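
    For reference, a sketch of the usual cure when ssh offers keys beyond the one named in the matching Host block: OpenSSH also tries every key the agent holds unless IdentitiesOnly restricts it to the configured IdentityFile (a standard ssh_config option in the OpenSSH shipped with Ubuntu 12.04):

        Host gitadmin.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_gitolite_mantra
            # Offer only the IdentityFile above, never other agent/default keys
            IdentitiesOnly yes

        Host git.gammu.com
            User git
            IdentityFile /home/alvaro/.ssh/id_alvaro_mantra
            IdentitiesOnly yes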

    Read the article

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NETBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in pfSense system logs: (Blocked) Jun 9 08:48:50 DOMAIN 10.0.1.100:123 192.128.127.254:123 UDP (Blocked) Jun 9 08:48:53 DOMAIN 10.0.1.100:137 192.128.127.254:137 UDP As far as I can tell the NTP service is working normally otherwise: DC2.domain.com[10.0.1.101:123]: ICMP: 0ms delay NTP: -0.0131705s offset from DC1.domain.com RefID: DC1.domain.com [10.0.1.100] Stratum: 3 DC1.domain.com *** PDC ***[10.0.1.100:123]: ICMP: 0ms delay NTP: +0.0000000s offset from DC1.domain.com RefID: clock1.albyny.inoc.net [64.246.132.14] Stratum: 2 The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123). The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0x1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. Pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2), which makes my head hurt. Any ideas? EDIT1: It should probably be noted that after a dns flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records. The attdns.com domain is not present in cached lookups. 127.in-addr.arpa is clean of any funkyness. EDIT2: The loopback ping response from nothing.attdns.com is possibly unrelated. Machines on other networks are also displaying this behavior. EDIT3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in esxi 5.5 (I know, shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look this VM is identified as 10.0.1.253, so I still have no idea why it's sending NTP requests as 192.128… Since this firewall was a temporary solution to a problem that no longer exists, I am going to decommission it. EDIT4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the mystery subnet, and the machine is broadcasting NTP requests over both adapters.
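
    For anyone retracing this, the stock w32tm switches on Server 2012 for auditing where w32time is actually configured to look:

        REM Configured time source and sync type (NTP vs NT5DS domain hierarchy)
        w32tm /query /configuration
        w32tm /query /source

        REM Per-peer state and current status (the output quoted above
        REM resembles /query /status run against each DC)
        w32tm /query /peers
        w32tm /query /status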

    Read the article

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; root /www; index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } Relevant bits from php-fpm.conf: ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = /www In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. With the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works, which means the file exists and the permissions are correct. Why is this happening, and what are the root causes of things breaking for just the testapp.com v-host? Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem. If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing it to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine because its root is /www, whereas things just won't work for the testapp.com v-host because the root is /www/app/www.
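
    For reference, a sketch of a common way to take path mangling out of the equation when PHP-FPM reports No input file specified: pass nginx's already-resolved path explicitly via $request_filename (a standard nginx variable) and see whether PHP then finds the file. Worth also checking that the included fastcgi_params file does not set SCRIPT_FILENAME itself:

        location ~ \.php$ {
            # $request_filename is the path nginx resolved from root/alias
            # plus the request URI, so it cannot drift from the vhost root.
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
        }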

    Read the article

  • ServerAlias not working

    - by Janis Peisenieks
    I have a VPS that I have configured to host multiple websites with name-based hosting. It is all good while only using example.com and www.example.com. It also works with example.net, but when I try www.example.net, it reverts to my default site configuration, which just shows my default (empty) index.html page. Here's the default file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Here's a configuration for the example.com site: <VirtualHost *:80> ServerAdmin admin@example.com ServerName example.com ServerAlias www.example.com DocumentRoot /srv/www/example.com/public_html/ ErrorLog /srv/www/example.com/logs/error.log CustomLog /srv/www/example.com/logs/access.log combined <Directory /srv/www/example.com/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here is the config for the example.net site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.net ServerAlias www.example.net DocumentRoot /srv/www/example.net/public_html/ ErrorLog /srv/www/example.net/logs/error.log CustomLog /srv/www/example.net/logs/access.log combined <Directory /srv/www/example.net/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> Where could the problem be? I believe that there is something going wrong with the ServerAlias property. Could it be because of the way the sites are built? Because example.com is a Joomla site, and example.net is a Zend Framework site. Just in case, I'll also insert the .htaccess file for example.net, since example.com has its disabled: example.net: SetEnv APPLICATION_ENV development RewriteRule ^(browse|config).* - [L] ErrorDocument 500 /error-docs/500.shtml SetEnv CACHE_OFFSET 2678400 <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$"> Header set Expires "Fri, 25 Sep 2037 19:30:32 GMT" Header unset ETag FileETag None </FilesMatch> RewriteEngine On RewriteRule ^(adm|statistics) - [L] RewriteRule ^/public/(.*)$ http://example.net/$1 [R] RewriteRule ^(.*)$ public/$1 [L] Any help would be greatly appreciated! Edit So that my question is ABSOLUTELY clear: The problem is that one site works both with and without the www prefix, and the second one does not. I would like to know how to enable the second site to work with the www prefix as well.
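
    For reference, the standard first check for name-based vhost puzzles is Apache's parsed-vhost dump (stock apache2ctl on Debian/Ubuntu):

        # Dump how Apache parsed the virtual hosts
        apache2ctl -S
        # The output lists, per port, the default vhost and every namevhost
        # with its ServerName (newer releases also list aliases); if the
        # example.net vhost is missing there, Apache never registered it,
        # e.g. the site isn't enabled or a NameVirtualHost declaration is
        # missing for that address.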

    Read the article

  • Troubleshooting sudoers via ldap

    - by dafydd
    The good news is that I got sudoers via ldap working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP: # %wheel, SUDOers, example.com dn: cn=%wheel,ou=SUDOers,dc=example,dc=com cn: %wheel description: Members of group wheel have access to all privileges. objectClass: sudoRole objectClass: top sudoCommand: ALL sudoHost: ALL sudoUser: %wheel So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands. # %Administrators, SUDOers, example.com dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com sudoRunAsGroup: appGroup sudoRunAsUser: appOwner cn: %Administrators description: Allow members of the group Administrators to run various commands . objectClass: sudoRole objectClass: top sudoCommand: appStop sudoCommand: appStart sudoCommand: /path/to/appStop sudoCommand: /path/to/appStart sudoUser: %Administrators Unfortunately, members of Administrators are still refused permission to run appStart or appStop: -bash-3.2$ sudo /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com. -bash-3.2$ sudo -u appOwner /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com. /var/log/secure shows me these two sets of messages for the two attempts: Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/host.example.com@EXAMPLE.COM' Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/host.example.com@EXAMPLE.COM' Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner; COMMAND=/path/to/appStop The questions: Does sudo have some sort of verbose or debug mode where I can actually watch it capture the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.) Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify. Is this an LDAP search failure? Is this a group member matching failure? Identifying why the command fails will help me identify the fix... Next step: Recreate the privilege in /etc/sudoers, and see if it works locally... Cheers!
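
    On the debug question, a sketch: sudo's LDAP backend reads its own options from the LDAP configuration file, and sudoers_debug is the documented trace knob in the sudoers.ldap manual for sudo 1.7.x (the file is /etc/ldap.conf or /etc/sudo-ldap.conf depending on how the package was built):

        # 1 = moderate trace of LDAP sudoers lookups; 2 = full trace,
        # including the filters submitted and each sudoRole considered.
        sudoers_debug 2

    With level 2 set, the next sudo attempt prints the LDAP queries and every sudoRole it evaluates, which should show whether cn=%Administrators is matched at all and why the command check fails.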

    Read the article
