Search Results

Search found 53261 results on 2131 pages for 'system state'.


  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks, and familiarize yourself with the patching tools in EBS. Having a Proactive Patch Plan is the way to go! Proactive patching is a preventive measure that gives you a complete patching strategy for applying patches periodically. Oracle provides several tools to help you set the foundation for a solid, proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy, along with the benefits. Among other things it covers: how to plan ahead for system downtime; the patching tools in E-Business Suite (AutoPatch, OUI, OPatch); how to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.); and how to properly test your patching plan and move to Production. Make sure you visit the new E-Business Patching Community! We encourage you to access the "E-Business Patching Community" prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experiences with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin. Related articles: What's the Best Way to Patch an E-Business Suite Environment? | Patch Wizard Utility

    Read the article

  • Unable to mount USB drive: "Error creating mount point: Permission denied"

    - by steve
    Whenever I plug a USB drive into my computer a window pops up and says: Unable to mount [Name of USB] Error creating mount point: Permission denied
      steve@goliath:/$ uname -a
      Linux goliath 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
      steve@goliath:/$ sudo fdisk -l
      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
      Disk /dev/sda: 120.0 GB, 120034123776 bytes
      255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0f716ee1
      Device Boot Start End Blocks Id System
      /dev/sda1 1 234441647 117220823+ ee GPT
      WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
      Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0f710ee1
      Device Boot Start End Blocks Id System
      /dev/sdb1 1 2930277167 1465138583+ ee GPT
      Disk /dev/sdc: 16.0 GB, 16005464064 bytes
      74 heads, 10 sectors/track, 42244 cylinders, total 31260672 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xc3072e18
      Device Boot Start End Blocks Id System
      /dev/sdc1 8064 31260671 15626304 c W95 FAT32 (LBA)
      steve@goliath:/$ sudo mkdir /media/external
      mkdir: cannot create directory `/media/external': Permission denied
      steve@goliath:/$ sudo mkdir /media/usb0
      mkdir: cannot create directory `/media/usb0': Permission denied
      steve@goliath:/$ sudo ls -l / | grep media
      drwxr-xr-x 3 root root 4096 Oct 3 22:48 media
      steve@goliath:/$ ls /media/ -a
      . .. MediaShare
    MediaShare is the directory on my server that has all my movies and music. If there is any information I left out please let me know.
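    A minimal diagnostic sketch, added here and not part of the original post. Since even sudo mkdir under /media fails, one assumption is that /media carries the immutable attribute or sits on a filesystem that has dropped to read-only; these commands would check both possibilities:
      # check whether /media is flagged immutable (the 'i' attribute blocks even root)
      lsattr -d /media
      # clear the flag if it is set
      sudo chattr -i /media
      # check whether the root filesystem is currently mounted read-only
      mount | grep ' / '
      # retry creating the mount point
      sudo mkdir /media/usb0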

    Read the article

  • Unable to install Skype ("Unmet dependencies" error)

    - by Alex Maslakov
    Trying to install Skype on Ubuntu 12, I faced an issue. When I type:
      sudo apt-get update
      sudo apt-get install skype
    I get this error:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      skype is already the newest version.
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
      skype : Depends: lib32stdc++6 (>= 4.1.1-21) but it is not going to be installed
              Depends: lib32asound2 (> 1.0.14) but it is not going to be installed
              Depends: ia32-libs but it is not going to be installed
              Depends: libc6-i386 (>= 2.7-1) but it is not going to be installed
              Depends: lib32gcc1 (>= 1:4.1.1-21+ia32.libs.1.19) but it is not going to be installed
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    How do I solve it? Is this the right way to install Skype? UPDATE: If I try
      sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype
    then I get
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      skype is already the newest version.
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
      ia32-libs : Depends: ia32-libs-multiarch
      lib32asound2 : Depends: libasound2 (= 1.0.25-1ubuntu10)
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
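    A hedged sequence to try, added here and not from the original question. Assuming the broken state comes from a partially configured 32-bit compatibility stack, the usual first steps are to let apt repair itself (as the error message suggests) and then inspect why ia32-libs is uninstallable:
      sudo apt-get update
      # let apt attempt to repair the broken dependency chain
      sudo apt-get -f install
      # show which repositories (if any) can provide the missing 32-bit packages
      apt-cache policy ia32-libs ia32-libs-multiarch lib32asound2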

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading... the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g., put it into single-user mode), then run these commands:
      ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
      ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON
    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly highly recommend that you use solid state storage for your databases with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article

  • Terminal/Putty showing control characters (^M) after updating

    - by jaycee48
    I updated my system yesterday afternoon using the recommended updates from Update Manager. After it completed, I shut down my system and went home for the day. I came in this morning and I am getting control characters displayed when using vi in the standard terminal emulator, in PuTTY, and in PuTTY inside the Windows virtual machine that I run with VirtualBox. I have made no other system changes and I cannot figure out how this occurred. It's as if every text file I have was created in DOS. I've searched the forums and I haven't found any answers. I am using xterm as my emulator, and I checked with 3 of my coworkers and none of them are having this problem, so we do not believe it is a server-side issue. Especially since I've checked 3 different servers. There's nothing in my .profile other than PATH variables, so I'm using the same terminal settings as everyone else. Some files are fine (I can open and read both /etc/environment and my .profile) but most of any kind of server-generated log file is trash. Running cat or head on the same file displays the contents without the characters. Any ideas would be appreciated. Thanks.
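    A small diagnostic sketch, added here and not part of the original post. One possibility (an assumption, not established in the post) is that the logs really do contain carriage returns and the updated vi/vim now shows them as ^M; these checks would confirm whether CR characters are present and whether local vim configuration is involved. The log path is a placeholder:
      # report whether the file contains CRLF line endings
      file /var/log/suspect.log
      # show carriage returns explicitly; a trailing ^M means CRs are really in the file
      cat -v /var/log/suspect.log | head
      # open the file with vim's built-in defaults only, bypassing any local configuration
      vim -u NONE /var/log/suspect.log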

    Read the article

  • Ubuntu 64bit Black Screen on Minecraft

    - by Signify
    I have tried posting on the forums, but I really need help (I'm a server admin and really don't want to have to switch to Windows just to run Minecraft). Anyhow, I was originally running OpenJDK 6, as I was told that 7 was unstable, and was getting periodic lag spikes while walking (at least once every 3 seconds the screen would freeze for a tenth of a second). After that, I attempted to install Oracle's Java JDK 7 (I couldn't get ahold of 6 without signing up for Oracle's newsletters). Upon attempting to run Minecraft, I got a black screen after logging in, with this error message:
      27 achievements
      182 recipes
      Setting user: Thunder7102, -1618112820878091307
      Exception in thread "Minecraft main thread" java.lang.UnsatisfiedLinkError: /home/noiro/.minecraft/bin/natives/liblwjgl.so: /home/noiro/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
      at java.lang.ClassLoader$NativeLibrary.load(Native Method)
      at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
      at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
      at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
      at java.lang.Runtime.load0(Runtime.java:792)
      at java.lang.System.load(System.java:1059)
      at org.lwjgl.Sys$1.run(Sys.java:69)
      at java.security.AccessController.doPrivileged(Native Method)
      at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
      at org.lwjgl.Sys.loadLibrary(Sys.java:81)
      at org.lwjgl.Sys.<clinit>(Sys.java:98)
      at org.lwjgl.opengl.Display.<clinit>(Display.java:132)
      at net.minecraft.client.Minecraft.a(SourceFile:184)
      at net.minecraft.client.Minecraft.run(SourceFile:657)
      at java.lang.Thread.run(Thread.java:722)
    Now, this got me fed up, so I tried to install a Windows 7 virtual machine through VirtualBox. I gave it 256 MB of graphics memory with 2D and 3D acceleration and 3 GB of RAM. I installed Java JDK 7 for Windows (which does work, from experience on my other Windows 7 partition). Once again, a black screen after login. What the heck is going on, guys? My system specs: Ubuntu 12.04 64-bit, fully updated, running GNOME 3; NVIDIA GTS 450 1.3 GB OC'd; AMD Athlon II X4 2.8 GHz; 6 GB of RAM. So, what do you think?
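    A hedged diagnostic sketch, added here and not part of the original post. The "wrong ELF class: ELFCLASS32" message suggests a 32-bit LWJGL native being loaded by a 64-bit JVM (or vice versa); these commands would confirm which combination is present. The path comes from the stack trace; the idea that the launcher re-downloads matching natives after the bin directory is removed is an assumption:
      # confirm the word size of the native library the game is trying to load
      file ~/.minecraft/bin/natives/liblwjgl.so
      # confirm which JVM (32- or 64-bit) is actually being used
      java -version
      update-alternatives --list java
      # if they do not match, removing the downloaded natives lets Minecraft fetch fresh ones
      rm -rf ~/.minecraft/bin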

    Read the article

  • Can't run Minecraft on Ubuntu 12.04 LTS [duplicate]

    - by user170011
    This question already has an answer here: "How to correctly install and troubleshoot Minecraft (Client)" (3 answers)
    I was trying to run Minecraft on my laptop with Ubuntu 12.04 LTS 64-bit. I have a Lenovo IdeaPad P580 with 7.7 GB of RAM and an Intel® Core™ i7-3520M CPU @ 2.90GHz × 4. Under the graphics section of the system overview in Ubuntu it says I have none installed. My computer comes with an NVIDIA GeForce graphics card but it isn't recognized. When I start Minecraft I get this crash report:
      ---- Minecraft Crash Report ----
      // Shall we play a game?
      Time: 24/06/13 7:23 PM
      Description: Failed to start game
      org.lwjgl.LWJGLException: Could not init GLX
      at org.lwjgl.opengl.LinuxDisplayPeerInfo.initDefaultPeerInfo(Native Method)
      at org.lwjgl.opengl.LinuxDisplayPeerInfo.<init>(LinuxDisplayPeerInfo.java:52)
      at org.lwjgl.opengl.LinuxDisplay.createPeerInfo(LinuxDisplay.java:684)
      at org.lwjgl.opengl.Display.create(Display.java:854)
      at org.lwjgl.opengl.Display.create(Display.java:784)
      at org.lwjgl.opengl.Display.create(Display.java:765)
      at net.minecraft.client.Minecraft.a(SourceFile:235)
      at avv.a(SourceFile:56)
      at net.minecraft.client.Minecraft.run(SourceFile:507)
      at java.lang.Thread.run(Thread.java:679)
      A detailed walkthrough of the error, its code path and all known details is as follows:
      -- System Details --
      Details:
      Minecraft Version: 1.5.2
      Operating System: Linux (amd64) version 3.5.0-34-generic
      Java Version: 1.6.0_27, Sun Microsystems Inc.
      Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Sun Microsystems Inc.
      Memory: 406175448 bytes (387 MB) / 514523136 bytes (490 MB) up to 1908932608 bytes (1820 MB)
      JVM Flags: 2 total; -Xmx2048M -Xms512M
      AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
      Suspicious classes: No suspicious classes found.
      IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
      LWJGL: 2.4.2
      OpenGL: ~~ERROR~~ NullPointerException: null
      Is Modded: Probably not. Jar signature remains and client brand is untouched.
      Type: Client (map_client.txt)
      Texture Pack: Default
      Profiler Position: N/A (disabled)
      Vec3 Pool Size: ~~ERROR~~ NullPointerException: null
    I can run it on different versions of Linux, such as Fedora.
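    A hedged first step, added here and not from the original question. "Could not init GLX" usually points at missing or broken 3D drivers, and the system overview reporting no graphics driver supports that; one assumption is that the NVIDIA proprietary driver simply is not installed. These commands would confirm the GPU and direct-rendering status and install the driver packaged for 12.04:
      # identify the graphics hardware
      lspci | grep -i vga
      # check whether any GL stack provides direct rendering (glxinfo is in mesa-utils)
      sudo apt-get install mesa-utils
      glxinfo | grep -i "direct rendering"
      # install the packaged NVIDIA driver, then reboot
      sudo apt-get install nvidia-current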

    Read the article

  • Kubuntu 11.10 very slow during file I/O

    - by dko
    After updating to Kubuntu 11.10, my file I/O performance has slowly gotten worse and worse. It is to the point where I'm getting 1 MB/s write/read speeds to the drive. If I download something, the whole machine becomes unresponsive, at times for up to 30 seconds. This usually causes a timeout in the download, and the download then stops. Even extracting archive files makes the computer unusable, on top of the terrible read/write speeds. It isn't the drive, as I have Windows installed as well and when I boot into it I have no issues with the drive. I did not have this issue using Kubuntu 11.04 and am thinking of downgrading. However, I'd much rather help out the Ubuntu community by working through these issues. I'm starting to suspect the new Linux kernel is just not handling file I/O well. During file I/O my system usage does pick up, but it is not 100% CPU usage. My system is as follows: Samsung 2 TB hard disk drive; AMD Phenom II X6 1055; 4 GB RAM (only one in use according to System Monitor); ATI HD 5850.
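    A hedged diagnostic sketch, added here and not in the original post. A common cause of this symptom is the drive dropping into a slower transfer mode or starting to fail, rather than the kernel itself, so checking the kernel log and measuring raw read speed would narrow it down; iotop shows which process is generating I/O during a stall:
      # look for ATA errors, resets, or a fallback to a slower DMA/PIO mode
      dmesg | grep -i -E 'ata|error|dma'
      # measure raw sequential read speed of the disk (run on an idle system)
      sudo hdparm -tT /dev/sda
      # watch per-process I/O while reproducing the slowdown
      sudo apt-get install iotop
      sudo iotop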

    Read the article

  • Prefer class members or passing arguments between internal methods?

    - by geoffjentry
    Suppose within the private portion of a class there is a value which is utilized by multiple private methods. Do people prefer having this defined as a member variable for the class, or passing it as an argument to each of the methods - and why? On one hand, I could see the argument that reducing state (i.e., member variables) in a class is generally a good thing; on the other hand, if the same value is used repeatedly throughout a class's methods, it seems like an ideal candidate for representation as class state, if only to make the code visibly cleaner. Edit: To clarify some of the comments/questions that were raised, I'm not talking about constants, and this isn't about any particular case - it's just a hypothetical that I was talking to some other people about. Ignoring the OOP angle for a moment, the particular use case that I had in mind was the following (assume pass by reference just to make the pseudocode cleaner):
      int x
      doSomething(x)
      doAnotherThing(x)
      doYetAnotherThing(x)
      doSomethingElse(x)
    So what I mean is that there's some variable that is common between multiple functions - in the case I had in mind it was due to chaining of smaller functions. In an OOP system, if these were all methods of a class (say due to refactoring via extracting methods from a large method), that variable could be passed around them all or it could be a class member.

    Read the article

  • Enforcing Constraints Upon Data Documents of Various Formats

    - by Christopher Berman
    This seems like the sort of problem that must have been solved elegantly long ago, but I haven't the foggiest how to google it and find it. Suppose you're maintaining a large legacy system, which has a large collection of data (tens of GB) of various formats, including XML and two different internal configuration formats. Suppose further that there are abstract rules governing the values these files may or may not contain. EXAMPLE: File A defines the raw, mathematical data pertaining to the aerodynamics of a car, for consumption by the physics component of the system. File B contains certain values from File A in an easily accessible XML hierarchy, for consumption by a different component of the system. There exists, therefore, an abstract rule (or constraint) such that the values from File B must match the values from File A. This is probably the simplest constraint that can be specified, but in practice the constraints between files can become very complicated indeed. What is the best method for managing these constraints between files of arbitrary formats, short of migrating the data over to an RDBMS (which simply isn't feasible for the foreseeable future)? Has this problem been solved already? To be more specific, I would expect the solution to at least produce notifications of violated constraints; the solution need not resolve the constraints.
    ==============================
    Sample file structures
    File A (JeepWrangler2011.emv):
      MODEL JeepWrangler2011 {
          EsotericMathValueX 11.1
          EsotericMathValueY 22.2
          EsotericMathValueZ 33.3
      }
    File B (JeepWrangler2011.xml):
      <model name="JeepWrangler2011">
          <!--These values must correspond to File A's EsotericMathValues-->
          <modelExtent x="11.1" y="22.2" z="33.3"/>
          [...]
      </model>

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise." Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain." Thought for the Day "The hardest single part of building a software system is deciding precisely what to build. " — Fred Brooks Source: SoftwareQuotes.com

    Read the article

  • Threading models when talking to hardware devices

    - by Fuzz
    When writing an interface to hardware over a communication bus, communications timing can sometimes be critical to the operation of a device. As such, it is common for developers to spin up new threads to handle communications. It can also be a terrible idea to have a whole bunch of threads in your system, and in the case that you have multiple hardware devices you may have many, many threads that are out of the control of the main application. Certainly it can be common to have two threads per device, one for reading and one for writing. I am trying to determine the pros and cons of the two different models I can think of, and would love the help of the Programmers community.
    Item 1: Each device instance handles its own threads (or shares a thread for a communication device). A thread may exist for writing, and one for reading. Requested writes to a device from the API are buffered and worked on by the writer thread. The read thread exists in the case of blocking communications, and uses callbacks to pass read data to the application. Timing of communications can be handled by the communications thread.
    Item 2: Devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface and allows all reads and writes to take place and fire their callbacks. Timing is handled by the application, and the driver can request to be called at a given specific frequency.
    Pros for Item 1 include finer-grained control of timing at the communication level, at the expense of control over what's going on at the higher application level (which, for a real-time system, can be terrible). Pros for Item 2 include better control over the timing of the entire system for the application, at the expense of allowing each driver to handle its own business. If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.

    Read the article

  • Best design for a "Command Executer" class

    - by Justin984
    Sorry for the vague title, I couldn't think of a way to condense the question. I am building an application that will run as a background service and intermittently collect data about the system its running on. A second Android controller application will query the system over tcp/ip for statistics about the system. Currently, the background service has a tcp listener class that reads/writes bytes from a socket. When data is received, it raises an event to notify the service. The service takes the bytes, feeds them into a command parser to figure out what is being requested, and then passes the parsed command to a command executer class. When the service receives a "query statistics" command, it should return statistics over the tcp/ip connection. Currently, all of these classes are fully decoupled from each other. But in order for the command executer to return statistics, it will obviously need access to the socket somehow. For reasons I can't completely articulate, it feels wrong for the command executer to have a direct reference to the socket. I'm looking for strategies and/or design patterns I can use to return data over the socket while keeping the classes decoupled, if this is possible. Hopefully this makes sense, please let me know if I can include any info that would make the question easier to understand.

    Read the article

  • How to get started in coding for JBoss

    - by Mister IT Guru
    I have an idea of how to revamp our internal application, after having assessed the needs of the users, addressed their current issues, and the like. But I am not a coder. The last application I wrote was in college, in C (Java wasn't invented yet-ish!), and it was a booking system with the option to add on other modules, blah blah. I got an A, but I became a system administrator instead, more interested in designing and maintaining networks and infrastructure; but with the advent of virtualisation and Linux management tools such as Puppet, I can now manage infrastructure in my sleep! Now I want to write code - to put on my infrastructure - and I want to build .... a booking system! This is just to get experience, but I am at a loss as to where to start. Setting up the environment will take me about a day. Writing the spec, even how I want it to work, I already know, but as for actually coding in a decent manner, I can only guess. If anyone can recommend a book, website, blog, Twitter person to follow, or just advice on how to build a kick-butt basic JBoss app, then please, "I AM READY TO LEARN" :)
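    A minimal getting-started sketch, added here as a suggestion and not part of the original post. Assuming JBoss AS 7, which was current when this was asked, the quickest way to get a running server to deploy against is roughly:
      # download the JBoss AS 7 zip from jboss.org, then:
      unzip jboss-as-7.1.1.Final.zip
      cd jboss-as-7.1.1.Final
      # create a management user for the admin console
      bin/add-user.sh
      # start the server in standalone mode; the admin console is then at http://localhost:9990
      bin/standalone.sh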

    Read the article

  • Ubuntu installation does not recognize drive partitioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04 it does not recognize the Windows partition but offers the complete 1TB drive to install Ubuntu on instead (screenshot of the installer omitted). However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot?
    Response to comments: the following commands were executed and the results are shown below.
      fdisk -l
      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
      Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x34a38165
      Device Boot Start End Blocks Id System
      /dev/sda1 * 1 13 102400 7 HPFS/NTFS
      Partition 1 does not end on cylinder boundary.
      /dev/sda2 13 16318 130969600 7 HPFS/NTFS
      Disk /dev/sdb: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x14a714a6
      Device Boot Start End Blocks Id System
      /dev/sdb1 1 60801 488384001 83 Linux
      parted -l
      Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
      Error: /dev/sr0: unrecognised disk label
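    A hedged diagnostic sketch, added here and not part of the original question. fdisk warns that GPT data is present on /dev/sda even though the table it prints is a normal MBR with NTFS partitions, so one assumption is that stale GPT structures from an earlier partitioning scheme are confusing the installer; comparing the MBR and GPT views of the disk would confirm that before changing anything:
      sudo apt-get install gdisk
      # compare the MBR view (parted/fdisk) with the GPT view of the same disk
      sudo parted /dev/sda print
      sudo gdisk -l /dev/sda
      # if gdisk reports leftover or damaged GPT data on what should be an MBR disk,
      # its expert menu has a 'zap' option to remove the GPT structures -- back up first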

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say “If a man tells you that his car squirts milk in his eye when you lift the hood, don’t bet against that. You’ll end up with milk in your eye.” My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant “bet against the impossible” the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn’t exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said “well, must be something else.” She just couldn’t admit she was wrong. So don’t go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend that has recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn’t change the connection string information, and that’s provided by the target system, so this is impossible. But I won’t tell him that. Not until I look a little more. :)

    Read the article

  • OOP implementation of BUFFS and Stats. Suggestion

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS. I am not sure how to implement buffs. I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of which have a max cap... Effects can change a stat's value, increasing or decreasing it by a flat value or a percentage, or completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (Spell). Anyway, I need suggestions on how to implement it: keep an array of all active buffs for a stat and have a function calculate the buffed value each time I need the value of the stat, or...? Other, more OOP ways to do it? I have read "What's a way to implement a flexible buff/debuff system?", but that implements only a percentage system, where buffs can only say "+10%, +20%, etc...". I would love to have a hybrid system which can have percentage values or static values (like WoW does), and that is hard to implement with modifiers, because modifiers refer to the current value of the stat :/ Thanks for suggestions :)

    Read the article

  • Oracle Solutions supporting ICAM deployments

    - by user12604761
    The ICAM architecture has become the predominant security architecture for government organizations.  A growing number of federal, state, and local organizations are in various stages of using Oracle ICAM solutions.  The relevance of ICAM has clearly extended beyond the Federal ICAM mandates to any government program that must enable standards based interoperability like health exchanges and public safety.  The state government endorsed version of ICAM was just released with the NASCIO SICAM Roadmap. ICAM solutions require an integrated security architecture.  The major new release in August of Oracle Identity Management 11gR2 focuses on a platform approach to identity management.  This makes it easier for government organizations to acquire and implement a comprehensive ICAM solution, rather than individual products.  The following analysts reports describe the value of the Oracle Solutions: According to The Aberdeen Group:  “Organizations can save up to 48% deploying a platform of  (identity management) solutions when compared to deploying point solutions” IDC Product Flash, July 2012:  “Oracle may have hit the home run grand slam in identity management recently with the announcement of Oracle Identity Management 11g R2." For additional information on the Oracle ICAM solutions, attend the Webcast on October 10, 2012:  ICAM Framework for Enabling Agile, Service Delivery. Visit the Oracle Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources and networks.

    Read the article

  • Cannot Boot, How to recover

    - by Kendor
    Am running 11.10 64-bit with GNOME Shell. Something happened late Friday whereby my machine never gets to the login screen. I do get to an Ubuntu splash logo; after that I get a text screen that it hangs on. The screen is referring to issues with mounting various network resources, including VMware and also some references to my NAS that are in fstab. If I hit "esc" I can get to the GRUB menu and into the recovery console. If I try to do a file system check, I run into a similar error screen to the one I see when trying to boot normally. A possible clue here is that during my last good session I made some mods to the /etc/hosts file to reference another system which I'm connecting to with Synergy. I don't believe I have hardware issues, as I'm able to boot properly with a live USB and connect to my network/Internet. A few more tidbits: I have regular Deja Dup backups on my NAS. I have a good Clonezilla whole-drive image which is 4-6 weeks old. My home is encrypted. I thought I'd try blowing away my hosts file via live USB, but when I mounted the hard drive everything was read-only and I couldn't figure out how to replace it. P.S. I logged in via CLI and modded the hosts file to remove the entry I'd made, to no avail. The system continues to get stuck on the following: CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel version 3.1s Would love some sober advice on how to attack this.
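    A hedged recovery sketch, added here and not in the original post. Since the boot hangs on the CIFS/NAS mounts from fstab, one assumption is that those network mounts block boot when the network is not yet up; from the recovery console (or a live USB with the root partition mounted), remounting read-write and marking the network shares so they do not block boot is the usual workaround on 11.10. The share path and credentials file below are placeholders:
      # in the recovery console, make the root filesystem writable
      sudo mount -o remount,rw /
      # edit /etc/fstab and add the 'nobootwait' (mountall) option to the CIFS/NAS lines,
      # or 'noauto' to stop them being mounted at boot at all, e.g.:
      #   //nas/share  /mnt/nas  cifs  credentials=/etc/cifs-creds,nobootwait  0  0
      sudo nano /etc/fstab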

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include: new enterprise-focused administration features available in Oracle Java SE Advanced; products offering greater control of Java version compatibility; security updates; and a very useful new feature, the MSI-compatible installer. Download | Release Notes | Java SE 8 Documentation. New tools, features and enhancements highlighted in JDK 8 Update 20 are:
    - Advanced Management Console: The Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise and for ISVs with Java-based applications and solutions.
    - MSI Enterprise JRE Installer: Available for Windows 64- and 32-bit systems in the Oracle Java SE Advanced products, the MSI-compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user interaction requirements.
    - Performance: string de-duplication resulting in a reduced footprint, and improved support in G1 garbage collection for long-running apps.
    - A new 'force' feature in DRS (Deployment Rule Set), which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications, so end users don't need to approve security exceptions to run them.
    - Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4
    - JavaFX on ARM
    - Nashorn performance improvement by persisting bytecode after initial compilation
    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • Why do I get disk I/O errors booting the 3.2 kernel on a xen vps server?

    - by Doug
    I have a Xen VPS, which I just upgraded to the new LTS, 12.04 Precise Pangolin. However, I see this error on booting:
      [ 12.848076] end_request: I/O error, dev xvda, sector 12841
      [ 12.848093] end_request: I/O error, dev xvda, sector 12841
      [ 12.848103] Buffer I/O error on device xvda1, logical block 1605
      [ 12.848110] lost page write due to I/O error on xvda1
      [ 12.848129] Aborting journal on device xvda1.
    This results in / being mounted read-only. Reboot:
      [ 3.087257] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Marking fs in need of filesystem check.
      [ 3.087677] EXT3-fs (xvda1): recovery complete
      [ 3.088514] EXT3-fs (xvda1): mounted filesystem with ordered data mode
      Begin: Running /scripts/local-bottom ... done. done.
      Begin: Running /scripts/init-bottom ... done.
      fsck from util-linux 2.20.1
      PRGMRDISK1 contains a file system with errors, check forced.
      Checking disk drives for errors. This may take several minutes.
      Press C to cancel all checks in progress
      PRGMRDISK1: ***** REBOOT LINUX *****
      PRGMRDISK1: 371152/6001184 files (2.8% non-contiguous), 4727949/12000000 blocks
      mountall: fsck / [308] terminated with status 3
      mountall: System must be rebooted: /
      [ 151.566949] Restarting system.
      Name ID Mem VCPUs State Time(s)
      shadowmint 236 2048 1 --p--- 0.0
    Reboot - back to step 1. This is definitely an issue with the 3.2 kernel, because booting the 3.0.0 or 2.6.38 kernel series makes this issue magically disappear. I'm certain this is some kind of weird Xen thing, but I have no idea. Anyone? Anyhow, until this is resolved I strongly recommend against upgrading if you're running a Xen server.
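    A hedged workaround sketch, added here and not from the original post. Assuming the goal is simply to keep booting the known-good 3.0.0 kernel until the Xen issue is fixed, and assuming the guest's own GRUB config is what the VPS boots from (on some Xen hosts pvgrub/pygrub reads it directly), GRUB can be told to remember the selected entry instead of defaulting to the newest kernel. The menu entry title below is a placeholder; use the exact title from grub.cfg:
      # list the installed kernels/menu entries
      grep menuentry /boot/grub/grub.cfg
      # in /etc/default/grub, set GRUB_DEFAULT=saved, then regenerate the config
      sudo update-grub
      # pick the older kernel once; GRUB keeps using it on subsequent boots
      sudo grub-set-default "Ubuntu, with Linux 3.0.0-26-generic"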

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo /user friendly urls]

    - by Sam
    Hi folks, a question about URLs... a good friend of mine and I are exploring the possibilities of the following three scenarios for a website where each webpage has a menu system with about 130 links:
      SCENARIO 1: the page's menu system has SHORT non-descriptive hyperlinks as well as a SHORT canonical: <a href="design">dutch design</a> -- the page's canonical URL points to e.g. "design"
      SCENARIO 2: the page's menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs: <a href="design">dutch design</a> -- the page's canonical URL points to: dutch-design-crazy-yes-but-always-honest
      SCENARIO 3: the page's menu system has LONG descriptive hyperlinks with LONG canonical URLs: <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a> -- the page's canonical URL points to: dutch-design-crazy-yes-but-always-honest
    Currently we have scenario 2... should we progress to scenario 3? All three work fine and point via URL rewriting (mod_rewrite) to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of: user-friendliness (page loading times, full URL visible in the URL bar or not); SEO friendliness (proper indexing due to the URLs containing descriptive, relevant tags); and other concerns we forgot, like possible penalties for so many words in link hrefs? Thanks very much for your suggestions: much appreciated!

    Read the article

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a Minecraft server and want to allow admins to have SSH access to the Minecraft server console and the appropriate MC server files, but not the whole system. The console provided by the Minecraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic CLI tools such as wget, cp, mv, rm, and a text editor.
    Plan:
    - I have already set up the SSH aspect of things, requiring pre-shared keys and whatnot.
    - Set up a jailed environment in which all user activity will be contained.
    - Set up user accounts. The first user account will be the minecraft user. The minecraft user will start the MC server in a multiuser screen session and allow the other admins to attach to it. Subsequent users should have their own /home directory for normal usage.
    - Set up ACLs for the appropriate files to allow each user to edit the MC server files.
    - No one will be doing system updates, nor will anyone be installing any programs, so I'll be the only user with sudo.
    The issues: I don't want the SSH users to have access to the whole system. Users will still need to use wget or curl to update the MC server files. Is chroot the right tool for this use case, or is there something more appropriate for the job? I have no experience setting up a chroot environment and have found several tools to aid in this process. Jailkit seems to be the most robust, but it's not in the standard repos.
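    A rough sketch of one approach, added as an illustration and not part of the original question: a chroot built with debootstrap, combined with OpenSSH's ChrootDirectory directive. The paths and group name are placeholders:
      # build a minimal Ubuntu filesystem to serve as the jail
      sudo apt-get install debootstrap
      sudo debootstrap precise /srv/mc-jail
      # sshd requires the jail root to be owned by root and not writable by group/other
      sudo chown root:root /srv/mc-jail
      # in /etc/ssh/sshd_config, confine members of the 'mcadmin' group to the jail:
      #   Match Group mcadmin
      #       ChrootDirectory /srv/mc-jail
      sudo addgroup mcadmin
      sudo service ssh restart
    Inside the jail you would still need to install or copy in the tools the admins need (wget, an editor, screen) and bind-mount or copy the Minecraft directory; jailkit automates much of this if building it from source is acceptable.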

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (which dates back to the .NET 1.1 era). In the new Server 2008 environment with IIS 7 we get the following errors:
      The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of this name does not exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a session of this name exists in the TraceSessions configuration file and that the Windows Trace Session Manager service is started.
      at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
      --- End of inner exception stack trace ---
      at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
      at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
      at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
      at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)
    I've seen the same errors on development Win7 machines when using IIS. It does not seem to be a problem on Cassini. I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:
      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
          <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
          <sessionList>
              <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
          </sessionList>
      </configuration>
    The errors still continue, but I was able to disable the parameter in the eventSink configuration:
      <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
          <!-- MNF disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
               <parameter name="sessionName" value="TraceSession" />
          -->
      </eventSink>
    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager
    One day I wish to replace all EnterpriseInstrumentation calls with NLog.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 seconds, but mine takes more than 60 seconds every time it boots. I also read on some forums that I need to post the dmesg output and a bootchart to identify which process is slowing down the boot. As I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how.
    My laptop specs:
      Model: ASUS K45VS
      RAM: 8 GB
      CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
      Graphics card: NVIDIA GeForce GT 645M
      HDD: 750 GB
      OS: single-boot Ubuntu 12.04 LTS
      System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
      System.release: Ubuntu 12.04.4 LTS
      System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash
    My dmesg (which highlights the huge time gap):
      [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
      [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
      [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
      [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
      [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
      [ 802.034422] audit_printk_skb: 69 callbacks suppressed
      [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
      [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    My bootchart image is attached in the original post. Looking forward to learning how to improve both my boot time and my knowledge. Thanks in advance :)
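    A small analysis sketch, added here and not part of the original post. Note that the big jump in the quoted dmesg (from about 32 s to about 802 s) happens long after boot, so it reflects normal post-boot activity rather than a boot delay; to see where boot time actually goes on 12.04, the bootchart image plus a scan of dmesg for large gaps is usually enough. The awk one-liner simply prints any kernel message that follows a gap of more than five seconds:
      # install bootchart and reboot to regenerate /var/log/bootchart/*.png for the current boot
      sudo apt-get install bootchart pybootchartgui
      # print kernel messages that appear after a gap of more than 5 seconds
      dmesg | awk -F'[][]' '{ t = $2 + 0; if (t - prev > 5) print; prev = t }'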

    Read the article
