Search Results

Search found 24353 results on 975 pages for 'test coverage'.

  • LAN access via USB from iPod Touch?

    - by Alec
    I need to browse the local web server from my iPod Touch to test apps we're developing. I'm not allowed to install a separate wireless access point, which would be the easiest solution. Can I use the USB cable for this? Also, the local PC is a Dell Mini 9 running Ubuntu. Has anyone managed to use the wireless card to create an ad hoc connection to an iPod Touch, so the iPod can browse the Ubuntu web server? That would be an acceptable alternative for me. Thank you!
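
    One possibility for the ad hoc route, sketched here under the assumption that the Mini 9's wireless interface is wlan0 and that NetworkManager can be stopped while testing (interface name, SSID, and addresses are all placeholders):

        # stop NetworkManager so it doesn't fight over the interface
        sudo service network-manager stop
        # put the card into ad hoc mode with a throwaway SSID and a static IP
        sudo iwconfig wlan0 mode ad-hoc essid ipodtest channel 1
        sudo ifconfig wlan0 192.168.2.1 netmask 255.255.255.0 up

    The iPod would then join the ipodtest network, be given a static address such as 192.168.2.2 in its Wi-Fi settings, and browse to http://192.168.2.1/.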

  • Problems installing icinga-web

    - by Kungurov
    I'm using Ubuntu 10.04 LTS (64-bit, Server) with Apache 2.2.14. Following the instructions from the official Icinga page, http://docs.icinga.org/latest/en/index.html, I installed icinga-web-1.7.1 on my machine and configured a few hosts for test purposes. The Classic interface runs as expected, but the new Web interface does not show any data. When I run

        ps aux | grep ido2db | grep -v grep

    I get

        icinga 27425 0.0 0.0 41464 600 ? Ss Jul27 0:00 /usr/local/icinga/bin/ido2db -c /usr/local/icinga/etc/ido2db.cfg

    which might indicate a problem with idomod/ido2db, because according to the docs there should be at least two processes in the grep output. Any ideas how to fix that?
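
    A first hedged check (paths here follow the default /usr/local/icinga layout from the docs; adjust to your install) is whether idomod is actually loaded as an event broker module and whether its socket exists:

        # idomod should be registered in icinga.cfg as a broker module,
        # roughly: broker_module=/usr/local/icinga/lib/idomod.so config_file=...
        grep broker_module /usr/local/icinga/etc/icinga.cfg
        # the socket ido2db listens on should exist while icinga is running
        ls -l /usr/local/icinga/var/ido.sock

    If the broker_module line is missing, idomod never connects and ido2db stays as the single listener process seen above.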

  • how to make a very large radmind image, faster

    - by Wang
    Making the new Snow Leopard radmind image for my lab involves manipulating over 50 GB of applications, including passing them over the network. Each try takes four hours or more, and if it fails there's no apparent way to pick up again from partway through. Instead I have to delete all the data (a minutes-long operation) and start over. Furthermore, one successful upload just means I now get to test whether the image works; if it doesn't, I can look forward to repeating the upload as many times as I need to troubleshoot. How can I do this faster and/or smarter?

  • Squid - Selective reverse proxy and forward proxy

    - by Dean Smith
    I'd like to set up a Squid instance to do selective reverse proxying for a configured list of URLs while acting as a normal forward proxy for everything else. We are building new infrastructure, parallel live as it were, and I want a proxy that people can use that will force selective traffic onto the new platform while acting as a plain forward proxy for anything else. This makes it very easy for people/systems to test the portions of the new platform we want without having to change too much; they just use a proxy address. Is such a setup possible?
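
    Such a split is possible in squid.conf: requests whose destination matches an ACL are forced through a cache_peer pointing at the new platform (reverse-proxy style), and everything else goes direct. A minimal sketch, with the domain and peer address as placeholders:

        # normal forward-proxy listening port
        http_port 3128
        # traffic that should be forced onto the new platform
        acl new_platform dstdomain www.example.com
        # the new platform's front end, treated as an origin server
        cache_peer 10.0.0.10 parent 80 0 no-query originserver name=newsite
        cache_peer_access newsite allow new_platform
        cache_peer_access newsite deny all
        # matching requests must go via the peer; all others go direct
        never_direct allow new_platform
        always_direct allow !new_platform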

  • subversion: enforce TLS

    - by Daniel Marschall
    Hello, I am running Subversion on a Debian Squeeze system with Apache 2 and mod_dav_svn for viewing the contents with a web browser. I want to enforce the use of TLS, so that the login data and the SVN contents cannot be read off the connection. I have tried the following in /etc/apache2/conf.d/subversion.conf:

        <Location /svn>
            DAV svn
            SVNParentPath /daten/subversion/

            # our access control policy
            AuthzSVNAccessFile /daten/subversion/access_control

            # try anonymous access first, resort to real
            # authentication if necessary.
            Satisfy Any
            Require valid-user

            # how to authenticate a user
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /daten/subversion/.htpasswd

            # Test
            SSLRequireSSL
            RewriteEngine On
            RewriteCond %{SERVER_PORT} !443
            RewriteRule ^svn/(.*)$ https://www.viathinksoft.de/svn/$1 [R,L]
        </Location>

    Alas, this does not work. There is no redirect, and a plain HTTP request at /svn/(projectname)/(somefolder) still succeeds. This SSL enforcement policy should work for:

    - viewing the contents with a web browser
    - retrieving contents with the TortoiseSVN client
    - committing contents with the TortoiseSVN client

    Can you please help me? Regards, Daniel Marschall
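
    For what it's worth, mod_rewrite directives inside a <Location> block behave differently from server context, and SSLRequireSSL only denies plain-HTTP requests rather than redirecting them. A common pattern, assuming the site already has separate port-80 and port-443 virtual hosts (a sketch, not the exact site layout), is to keep the <Location /svn> configuration in the SSL host only and redirect in the plain host:

        <VirtualHost *:80>
            ServerName www.viathinksoft.de
            Redirect permanent /svn https://www.viathinksoft.de/svn
        </VirtualHost>

    A browser and TortoiseSVN will both follow the redirect for reads, but repository URLs saved in working copies should also be switched to https:// (svn switch --relocate) so commits use TLS from the start.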

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue.

    Separate editions for each library version: I make a separate release of my library for each version of my company software. Using some Maven cunning I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

    One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used, as in the sketch below. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better, or a suggestion for a different approach entirely. I'm leaning towards the second option.
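
    A hedged sketch of how the second option's meta-projects could drive the version override from the command line, assuming the POM declares the company dependency version as a property (e.g. <version>${company.version}</version> with a default in <properties>; the property name and version numbers are placeholders):

        # run the compatibility build once per company software version;
        # -D on the command line overrides the POM property
        mvn clean verify -Dcompany.version=3.1.0
        mvn clean verify -Dcompany.version=3.2.0
        mvn clean verify -Dcompany.version=4.0.0

    Wiring these invocations into CI as separate jobs gives a pass/fail signal per supported version.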

  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor, though, in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor, since it will try to log in and hang on the connection attempt (there is no timeout in the component). This really isn't an issue on the client side of the system, since the clients don't disconnect and reconnect repeatedly in the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas about how to structure the program, the main one being to put the communications inside a threaded data module so that if one thread crashes, another thread can test later and the program keeps going. Does this sound like a valid way to go? Any other ideas on how to ensure a reliable monitoring application built on a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but I don't have enough rep to create it.

  • Lexmark E240 printing issues

    - by NoamH
    I have a Lexmark E240 laser printer. I had been using it with 12.04 (32-bit) for 2 years with no significant issues. Since Lexmark does not support this printer on Linux, I used alternative drivers suggested by the community, such as HP LaserJet, E238, generic PS, etc. They all worked fine, more or less. After upgrading to 14.04 (64-bit fresh install) I tried to configure the printer as before, but now I have problems. The test page is OK, but when printing, most of the time the first page of the document is printed only partially and at 300% zoom. The next page might be OK. If I turn the printer off and back on, the first page might be OK, but in the next print job it will be broken again. I tried all the above printer options, with the same results. I did NOT install the Lexmark drivers, since they are intended for 12.04 and the package manager reports that the package is of "bad quality" (don't know why). Does anyone have any experience with this printer on 14.04 64-bit?

  • Calling a .NET C# class from XSLT

    - by HanSolo
    If you've ever worked with XSLT, you'd know that it's pretty limited when it comes to its programming capabilities. Try writing a for loop in XSLT and you'd know what I mean. XSLT is not designed to be a programming language, so you should never put too much programming logic in your XSLT. That code can be a pain to write and maintain, and so it should be avoided at all costs. Keep your XSLT simple and put any complex logic that your XSLT transformation requires in a class. Here is how you can create a helper class and call it from your XSLT. For example, this is my helper class:

        public class XsltHelper
        {
            public string GetStringHash(string originalString)
            {
                return originalString.GetHashCode().ToString();
            }
        }

    And this is my XSLT file (notice the namespace declaration that references the helper class):

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0" xmlns:ext="http://MyNamespace">
            <xsl:output method="text" indent="yes" omit-xml-declaration="yes"/>
            <xsl:template match="/">The hash code of "<xsl:value-of select="stringList/string1" />" is "<xsl:value-of select="ext:GetStringHash(stringList/string1)" />".
            </xsl:template>
        </xsl:stylesheet>

    Here is how you can include the helper class as part of the transformation:

        string xml = "<stringList><string1>test</string1></stringList>";
        XmlDocument xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);

        XslCompiledTransform xslCompiledTransform = new XslCompiledTransform();
        xslCompiledTransform.Load("XSLTFile1.xslt");

        XsltArgumentList xsltArgs = new XsltArgumentList();
        xsltArgs.AddExtensionObject("http://MyNamespace", Activator.CreateInstance(typeof(XsltHelper)));

        using (FileStream fileStream = new FileStream("TransformResults.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            // transform the xml and output to the output file
            xslCompiledTransform.Transform(xmlDocument, xsltArgs, fileStream);
        }

  • Why does my VertexDeclaration apparently not contain Position0?

    - by Phil
    I'm trying to get my code from calling each individual draw call down to using at least a VertexBuffer, and preferably an indexBuffer, but now that I'm attempting to test my code, I'm getting the error:

        The current vertex declaration does not include all the elements required by the current vertex shader. Position0 is missing.

    Which makes absolutely no sense to me, as my VertexDeclaration is:

        public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
            new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
        );

    Which clearly contains the information. I am attempting to draw with the following lines:

        VertexBuffer vb = new VertexBuffer(GraphicsDevice, VertexPositionColorNormal.VertexDeclaration,
            c.VertexList.Count, BufferUsage.WriteOnly);
        IndexBuffer ib = new IndexBuffer(GraphicsDevice, typeof(int), c.IndexList.Count, BufferUsage.WriteOnly);
        vb.SetData<VertexPositionColorNormal>(c.VertexList.ToArray());
        ib.SetData<int>(c.IndexList.ToArray());
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vb.VertexCount, 0, c.IndexList.Count / 3);

    Where c is a Chunk class containing an 8x8x8 array of boxes. Full code is available at https://github.com/mrbaggins/Box/tree/ProperMeshing/box/box. Relevant locations are Chunk.cs (contains the VertexDeclaration) and Game1.cs (Draw() is in lines 230-250). Not much else of relevance to this problem anywhere else. Note that large commented sections are from an old version of the drawing code.

  • wamp - Changing PHP version stops server running

    - by James Connor
    I downloaded WAMP, which works (green icon). However, I need to test a site locally in Joomla 1.5 that throws errors under PHP 5.3. I believe I need a PHP version lower than this, i.e. 5.2.x. To do this I went through PHP - Version - Get More... and installed an older PHP version. However, when I start this PHP version, the icon stays orange and going to localhost doesn't work. I haven't used WAMP before, so my knowledge of it is limited. If anyone could point me in the right direction it would be greatly appreciated. Regards.

  • Subversion hooks no longer running

    - by Chris Lieb
    I don't know when this started happening, but, for some reason, none of my Subversion hooks are running anymore. I am running Subversion 1.6.9 on a Gentoo Linux machine, which has had its hooks work in the past. I am serving Subversion through mod_dav_svn on Apache 2.2. I modified the hook scripts that I use to write into a file in the /tmp directory owned by apache:apache whenever they are executed, but after making a commit, there is nothing in the file that should have been written to. The scripts are executable and owned by apache:apache, so I don't think that is the issue. Here is one of my test scripts (post-commit.sh) that isn't getting executed:

        #!/bin/sh
        /bin/echo post-commit >> /tmp/z_test
        exit 0

    After running a commit, I expect both the pre-commit.sh and post-commit.sh hooks to run, but neither of them appears to be writing into the desired file (/tmp/z_test). What's going on?
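
    One classic gotcha worth ruling out: on Unix, Subversion looks for hooks named exactly pre-commit and post-commit, with no extension, so a script saved as post-commit.sh is silently ignored. A hedged check (repository path and revision number are placeholders):

        cd /path/to/repository/hooks
        # hooks must be named exactly post-commit, pre-commit, etc.
        sudo mv post-commit.sh post-commit
        sudo chmod 755 post-commit
        # run the hook by hand as the apache user to rule out permissions
        sudo -u apache ./post-commit /path/to/repository 1234

    If the scripts were recently renamed or restored from a backup with extensions added, that alone would explain hooks that used to work "no longer" running.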

  • Read/Write/Verify disk diagnostic tool for Mac OS X?

    - by Spiff
    It seems that there are many tools out there for Mac OS X that test a hard drive for bad blocks by doing a Read/Verify pass. That is, they read a block, then read it a second time, and verify that both reads yielded the same results. I need a tool that does a non-destructive Read/Write/Verify pass. It should read each block, write those same contents back out, and then read it again to verify. That way every block gets written, giving the hard drive a chance to spare out bad blocks. But since the same contents that were just read get written back out, it doesn't destroy data that wasn't already lost. I'm aware of several tools that can do Read/Verify, but I'm not aware of any that do Read/Write/Verify. Are there any tools that do what I want? Unix / open source tools that compile and run on Mac OS X are fair game too.
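
    Not Mac-native, but badblocks from e2fsprogs (installable via MacPorts or Homebrew) has exactly this non-destructive read-write mode: each block is read, overwritten with a test pattern, verified, and then restored from the original contents. A sketch, with the device name a placeholder and the disk unmounted first:

        # -n = non-destructive read-write test, -s = show progress, -v = verbose
        badblocks -nsv /dev/disk2

    The usual caveat applies: "non-destructive" assumes no power failure mid-pass, since blocks are briefly overwritten before being restored.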

  • Sonicwall NSA 3500, public ip for SSL VPN clients is not visible

    - by SlyMcFly
    I have a SonicWall NSA 3500 and I'm setting up the SSL VPN according to this guide. I got through setting up the SonicWall router, but then to test it the guide says, "Users can now go to the public IP of the sonicwall. Notice the new 'click here for SSL login' hyperlink." However, when I go to the public IP of the SonicWall I don't get a web page; the request just times out. Is there some other setting that I'm missing in order to make the SSL VPN login page public?

  • Is it possible to tunnel ICMP over TCP?

    - by Robert Atkins
    I don't want to tunnel TCP over ICMP (as ptunnel does); I want to go the other way around. I'm in a situation where I have TCP (HTTP) connectivity to a machine, but an internal firewall over which I have no control is swallowing pings. The monitoring software I'm using appears to determine connectivity by attempting to send a ping before it tries to connect to the web service on the target machine. It's failing this ping test and giving up. I believe that if I could fool my monitoring software into thinking pings were getting through, it would then connect to the web service and be on its merry way. Anyone know how I can do this? I have SSH and root access on the destination machine.

  • How to extract jpegs from a video file using ffmpeg

    - by Andrew Simpson
    I am using C# and ffmpeg. In this scenario I have 279 individual JPEGs, and I have used ffmpeg to create an AVI file from these images on my client. The command line is:

        -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -s 352x288 -y "C:\1\test.avi"

    I then upload the AVI to my server and extract JPEGs from it with:

        -i c:\1\1.avi c:\1\img-%05d.jpg

    I get 265 JPEGs back. Obviously ffmpeg is dropping these frames, most probably when the AVI is first created. Is there a way to force it to encode using ALL the images I have? Thanks. PS I did not specify any command-line options other than the size of the video output. As far as I am aware, if none are specified then ffmpeg automatically chooses the best ones?
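
    A hedged guess at the command-line fix: give ffmpeg the same rate on the input and the output so no rate conversion (and hence no frame dropping or duplication) happens while building the AVI, and pass -vsync 0 when extracting so every stored frame comes back out:

        # build: declare 10 fps for the image sequence AND the output stream
        ffmpeg -f image2 -r 10 -i "C:\000EC902F17F\img%05d.jpg" -r 10 -s 352x288 -y "C:\1\test.avi"
        # extract: -vsync 0 passes frames through without dropping/duplicating
        ffmpeg -i c:\1\1.avi -vsync 0 c:\1\img-%05d.jpg

    With -r given only on the input side, ffmpeg picks its own output rate and is free to drop frames to match it, which would account for 279 in, 265 out.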

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started some experiments with game development in JavaScript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling, and a hundred-ish drawImage() calls with a few transforms. This all runs great on Chrome, but unfortunately it already chugs on Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions:

    1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU, with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there...

    2) I have seen posts here and there with people saying not to use the translate(), rotate(), scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas?

    3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating?

    Thank you much for your advice!

  • How to Deal with an out of touch "Project manager"

    - by Joe
    This "manager" is 70+ years old and a math genius. We were tasked with creating a web application. He loves SQL and stored procedures. He first created this in MS Access. For the web app I had to take his DB and migrate it to SQL Server. His first thought was to have a master stored procedure with a WAITFOR handling requests from users. I eventually talked him out of that and used ASP.NET MVC, and then eventually the ASP.NET membership system. Now the web app mostly handles requests from the pages by passing them to stored procedures. It is all stored-procedure driven, the business logic as well. Now we are seeing one open DB connection per logged-in user, plus one. I use LINQ to SQL to check 2 tables and return the values, that's it, period. So 25 users is a load. He complains that my code is bad because his test-driver stored procedure simulates over 100 users with no issue. What are the best arguments for not having all the business logic in stored procedures? How should I deal with this? I am giving an abbreviated story, of course. He is a genius and part owner of the company; all the other owners trust him because he is a genius. And quoting: "He gets things done. Old school."

  • Custom daemon script: works, but does not run at boot / startup

    - by pearjoint
    This is Ubuntu 10.10 Maverick. I have the following shell script in init.d that I want to run as a "daemon" (background service with start/stop/restart, really) at system startup. There is a symlink in rc3.d. I tried 4 and 5 too. (Ideally this would initialize before graphical login happens and before a user logs in.) IMPORTANT: the script works 100% as expected and required when testing it with service MetaLeapDaemon start and service MetaLeapDaemon stop. (This shell script calls a Python program which makes sure the appropriate .pid files are both created at startup and deleted at exit.) So generally it works fine; my only issue is why it will not run at any of the run levels I tried. I know for sure it isn't run because the log file it normally creates does not get created. As you can see (by the lack of any uid:gid args in the start-stop-daemon commands), this would currently run only under root; is this forbidden in a default setup? Here's the script, pretty much your run-of-the-mill daemon script really:

        #! /bin/sh
        DAEMON=/opt/metaleap/_core/daemon/MetaLeapDaemon.py
        NAME=MetaLeapDaemon
        DESC="MetaLeapDaemon"

        test -f $DAEMON || exit 0
        set -e

        case "$1" in
            start)
                start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
                ;;
            stop)
                start-stop-daemon --stop --pidfile /var/run/$NAME.pid
                ;;
            restart)
                start-stop-daemon --stop --pidfile /var/run/$NAME.pid
                sleep 1
                start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
                ;;
            *)
                N=/etc/init.d/$NAME
                echo "Usage: $N {start|stop|restart}" >&2
                exit 1
                ;;
        esac

        exit 0
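
    A hedged suggestion, since the script itself behaves under service: hand-made rc3.d symlinks are easy to get subtly wrong (name, sequence number, or the matching K links), and the supported way to register a SysV-style script on 10.10 is update-rc.d:

        # remove any hand-made links first, then let update-rc.d create them
        sudo update-rc.d -f MetaLeapDaemon remove
        sudo update-rc.d MetaLeapDaemon defaults
        # inspect what was created across the run levels
        ls -l /etc/rc?.d/*MetaLeapDaemon*

    update-rc.d will also warn if the script lacks an LSB header block (### BEGIN INIT INFO ... ### END INIT INFO), which dependency-based boot ordering relies on; adding one is worth trying if the links alone don't fix it.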

  • OpenGL + Allegro. Moving from software drawing X Y to OpenGL is confusing

    - by Aaron
    Having a fair bit of trouble. I'm used to Allegro and drawing sprites on a bitmap buffer at X,Y coords. Now I've started a test project with OpenGL and it's weird. Basically, as far as I know, there are many ways to draw stuff in OpenGL. At the moment, I think I'm creating a quad (whatever that is), and I think I've given it a texture from a bitmap and I'm drawing that:

        GLuint gl_image;
        bitmap = load_bitmap("cat.bmp", NULL);
        gl_image = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, bitmap, GL_RGBA);
        glBindTexture(GL_TEXTURE_2D, gl_image);

        glBegin(GL_QUADS);
            glColor4ub(255, 255, 255, 255);
            glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
            glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0);
            glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0);
            glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
        glEnd();

    So yeah. I've got a few questions: Is this the best way of drawing a sprite? Is it suitable? The big question: can anyone help, or does anyone know any tutorials on this weird coordinate thing? If it even is that. It's vastly different from X,Y, but I want to learn it. I was thinking maybe I could learn how this weird positioning stuff works and then write a function to translate it to X and Y coords. That's about it. I'm still trying to figure it all out on my own, but any contributions you guys can make would be greatly appreciated =D Thanks!

  • Regulation of the software industry

    - by Flexo
    Every few years someone proposes tighter regulation for the software industry. This IEEE article has been getting some attention lately on the subject.

        If software engineers who write programs for systems that expose the public to physical or financial risk knew they would be tested on their competence, the thinking goes, it would reduce the flaws and failures in code—and maybe save a few lives in the bargain.

    I'm skeptical about the value and merit of this. To my mind it looks like a land grab by those that proposed it. The quote that clinches that for me is:

        The exam will test for basic knowledge, not mastery of subject matter

    because the big failures (e.g. THERAC-25) seem to be complex, subtle issues that "basic knowledge" would never be sufficient to prevent. Ignoring any local issues (such as existing protections of the title Engineer in some jurisdictions): the aims are noble - avoid the quacks/charlatans1 and make that distinction more obvious to those that buy their software. Can tighter regulation of the software industry ever achieve its original goal?

    1 Exactly as regulation of the medical profession was intended to do.

  • McAfee VirusScan Enterprise or avast! Free?

    - by Pieter
    I currently have McAfee VirusScan Enterprise on my computer. This was preinstalled on my PC. (My university did a bulk laptop purchase so I got a sweet deal on my laptop. McAfee was one of the extras that were included.) Apparently, it's getting bad ratings from sites such as Virus Bulletin and AV-Test. Am I better off with avast's free antivirus? Is it worth considering avast! Internet Security? I currently have a three-year license for VirusScan Enterprise. I keep my software up to date using Secunia PSI and I don't click on any suspicious links.

  • Storage product testing

    - by wildchild
    Hello, I know this is out of place (being an active member here, I am coming to the seniors for help), but I need some information regarding storage testing: testing of RAID arrays, SCSI, SAS, SATA, and also tests carried out on Fabric Manager (Cisco MDS series switches). I am aware that this is an administrative forum, and I would really appreciate it if you could direct me to the correct forums or links where I can learn these things. @moderators: Sorry for posting in the wrong place; I will delete this as soon as I get the help. Thanks!
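
    As a concrete starting point on the Unix side, fio is a common open-source load generator for exercising RAID arrays and individual SAS/SATA devices; a hedged sketch (device name, sizes, and runtimes are placeholders, and the write test destroys data on the target):

        # sequential write throughput against a raw device
        fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M --size=10g --direct=1
        # sustained random-read IOPS for 10 minutes
        fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k --runtime=600 --time_based --direct=1

    Fabric-level testing of Cisco MDS switches (zoning, failover, path-error injection) is a different discipline and is better served by the vendor's own documentation.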

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a migration recently, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003, from 2010 to the internet, but not from 2003 to 2010. I went through the normal checklist looking at permissions, DNS, and the routing group connectors. I verified that both servers listed in the routing group connectors were the routing master in their respective routing groups through the 2003 ESM. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either.

    For my previous post about this issue, in which inheritable permissions were the culprit, see: Exchange 2010, Exchange 2003 Mail Flow issue. And for routing group issues: Exchange 2007 Routing Group Connector Mayhem.

    I finally enabled logging on the SMTP virtual server on Exchange 2003 and on the Default Receive Connector on 2010 and sent a few test e-mails, where I found 2003 was having issues authenticating to 2010. By default 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was:

        4.7.0 Temporary Authentication Failure

    After scouring based on this error, I found the solution: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain Policy and changed this user right assignment to list only Domain Users. The fix was to clear this setting in the Default Domain Policy, force gpupdate to refresh the group policy settings, then ensure the appropriate users and groups were listed. This immediately fixed the problem and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes.

    The default user rights assignments for "Access this computer from the network":

    On workstations and servers:
    - Administrators
    - Backup Operators
    - Power Users
    - Users
    - Everyone

    On domain controllers:
    - Administrators
    - Authenticated Users
    - Everyone

    More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx
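
    For reference, the refresh-and-verify steps on the Exchange 2010 server after correcting the GPO would be along these lines (standard Windows commands, shown as a sketch):

        rem pull the corrected Default Domain Policy immediately
        gpupdate /force
        rem confirm which GPOs applied to the computer
        gpresult /r /scope computer

    The effective "Access this computer from the network" assignment can then be checked under Local Security Policy (secpol.msc) > Local Policies > User Rights Assignment.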

  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY - Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks, and familiarize yourself with the patching tools in EBS. Having a proactive patch plan is the way to go! Proactive patching is a preventive measure, allowing you to have a complete patching strategy when applying patches periodically. Oracle provides several tools to help you get started and set the foundation for a solid and proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy, along with the benefits. Among other things it covers the following:

    - How to plan ahead for system downtime
    - Patching tools in E-Business Suite (AutoPatch, OUI, OPatch)
    - How to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.)
    - How to properly test your patching plan and move to Production

    Make sure you visit the new E-Business Patching Community! We encourage you to access the E-Business Patching Community prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experience with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin.

    Related articles: What's the Best Way to Patch an E-Business Suite Environment? / Patch Wizard Utility
