Search Results

Search found 18319 results on 733 pages for 'reference parameters'.


  • Emacs stops taking input when a file has changed on disk [migrated]

    - by recf
    I'm using Emacs v24.3.1 on Windows 8. I had a file change on disk while I had an Emacs buffer open on that file. As soon as I attempted to make a change to the buffer, a message appeared in the minibuffer: "File blah.txt changed on disk; really edit the buffer? (y, n, r or C-h)". I would expect to be able to hit r to have it reload the disk version of the file, but nothing happens. Emacs completely stops responding to input. None of the listed keys work, nor do any other keys as far as I can tell. I can't C-g out of the minibuffer, Alt-F4 doesn't work, nor does Close window from the task bar. I have to kill the process from Task Manager. Does anyone have any idea what I'm doing wrong here? In case it's various modes not playing nicely with each other, my init.el is here for reference. Nothing complex. Here's the breakdown:
    - better-defaults (ido-mode, remove menu-bar, uniquify buffer names ('forward style), saveplace)
    - recentf-mode
    - custom frame title
    - visual-line-mode
    - require final newline and delete trailing whitespace on save
    - Markdown mode with auto-mode-alist
    - Flyspell with Aspell backend
    - PowerShell mode with auto-mode-alist
    - Ruby auto-mode-alist
    - Puppet mode with auto-mode-alist
    - Feature (Gherkin) mode with auto-mode-alist
    The specific file was a Markdown file with GitHub-flavored Markdown mode and Flyspell mode enabled.
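    For reference, a minimal elisp sketch of the init.el described above - package names, file patterns and hook choices are my assumptions, not the poster's actual file:

        ;; A minimal sketch of the init.el described above; exact package
        ;; names and file patterns are assumptions, not the poster's file.
        (require 'better-defaults)           ; ido, no menu bar, uniquify, saveplace
        (recentf-mode 1)
        (setq frame-title-format "%b - Emacs")
        (global-visual-line-mode 1)
        (setq require-final-newline t)
        (add-hook 'before-save-hook #'delete-trailing-whitespace)
        (add-to-list 'auto-mode-alist '("\\.md\\'" . gfm-mode))
        (setq ispell-program-name "aspell")
        (add-hook 'text-mode-hook #'flyspell-mode)
        (add-to-list 'auto-mode-alist '("\\.ps1\\'" . powershell-mode))
        (add-to-list 'auto-mode-alist '("\\.rb\\'" . ruby-mode))
        (add-to-list 'auto-mode-alist '("\\.pp\\'" . puppet-mode))
        (add-to-list 'auto-mode-alist '("\\.feature\\'" . feature-mode))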


  • Cannot connect to wireless network

    - by smr
    I am quite new to Ubuntu 12.10. All I remember from installing it is that I didn't check any box specifically to enable wireless connectivity. I tried looking into the various questions posted here - most of them refer to certain commands (frequently sudo) and things of that sort. Being new, I wasn't sure what those commands were trying to do. The network configuration settings showed up different options, like 'Wireless', 'Wired', 'IPv4', 'IPv6' etc., and I am not sure which settings need to be made - for example, what sort of 'WEP' settings. Although I wasn't able to connect to the wireless network, I was able to connect to the internet once I plugged in the ethernet cable. Can someone help me with:
    1) Understanding how to see whether Ubuntu is configured to connect to a wireless network.
    2) If so, configuring a wireless network (setting up a new one).
    3) Where in Ubuntu I should look to see if there are any hardware devices (such as a Broadcom wireless adapter) - the way we use Device Manager in Windows to look into adapter settings.
    4) Reference material to learn and understand the various Ubuntu commands (things like lshw etc.).
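    For point 3, a hedged sketch of commonly used terminal commands (not from the original post) that show what network hardware Ubuntu can see:

        # List network adapters the kernel can see (run from a terminal)
        sudo lshw -C network     # detailed view; shows the driver and whether the device is claimed
        lspci | grep -i network  # PCI wireless/ethernet chips, e.g. Broadcom
        rfkill list              # shows whether the radio is soft- or hard-blocked
        iwconfig                 # wireless-specific settings for each interface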


  • Goal => Microsoft Certification Exams for .NET 4

    - by Raghuraman Kanchi
    Microsoft Learning has announced the availability dates for the .NET 4.0 exams, starting 2nd July 2010. Being MCSD.NET for 2005, my friend and I decided to skip the 2008 certification exams (MCTS/MCPD) because we felt 2008 was a mere layer on top of .NET 2.0. Not so for .NET 4.0: we see it as a major overhaul, among the best of the .NET releases. The following exams are listed below for direct reference:
    - Exam 71-511, TS: Windows Applications Development with Microsoft .NET Framework 4
    - Exam 71-515, TS: Web Applications Development with Microsoft .NET Framework 4
    - Exam 71-513, TS: Windows Communication Foundation Development with Microsoft .NET Framework 4
    - Exam 71-516, TS: Accessing Data with Microsoft .NET Framework 4
    - Exam 71-518, Pro: Designing and Developing Windows Applications Using Microsoft .NET Framework 4
    - Exam 71-519, Pro: Designing and Developing Web Applications Using Microsoft .NET Framework 4
    I am planning to prepare for each of these exams alongside my .NET 4.0 work. Ever since the release of VS 2010 Beta 1, I have been working mostly on .NET 4.0, including Silverlight and MEF. Now it is time to read documentation, prepare notes, understand the material, do labs, write programs and applications, and pass the exams.


  • Spring Cleaning

    - by Tim Dexter
    I recently got a shiny new laptop; moving my shiz from old to new was not the nightmare it used to be. I have gotten into the habit of using a second hard drive in the media bay where the CDROM normally sits. That drive contains my life's work with BIP; I can pull it out and plug it into another machine very easily. I have been sorting through some old directories and files, archiving some, sharing others with colleagues. For instance, a little dated, but if you were looking for a list of Publisher reports available in EBS R12.1, here it is. I'm trying to track down a more recent R12 instance and will re-post the document. I also found another gem; it's a little out there in terms of usefulness, but I'm sharing it nonetheless: you can embed, or locally or remotely reference, SVG graphics (in XML format) and bring the images into BIP outputs. Template and sample data here. There's also a nice set of templates showing page number control and page suppression - they will need some explanation, so I'll save them for another post. The list goes on, but I'll save the rest for later. Back to the clean up!


  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL (data layer), this means we create a DL namespace, not a DL assembly. Benefits include:
    - faster compilation time
    - simpler deployment
    - faster startup time for your program
    - fewer assemblies to reference
    I'm on a small team of 5 devs. We have over 50 assemblies to maintain, and IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for fewer than 5 assemblies, with new assemblies created out of technical need, not layer requirements. Developers are worried for a few reasons:
    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms.
    C. Devs view this drive for a common assembly as a threat, since some team members have a tendency to change the common layer without regard for how it will impact the code that depends on it.
    My personal view: I see A as silos, a.k.a. cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies - having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?
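    A hedged C# sketch of the namespace-per-layer idea being argued for - one physical assembly, logically partitioned; all names are illustrative, not the team's actual code:

        // One physical assembly (e.g. MyCompany.Product.dll), partitioned logically.
        namespace MyCompany.Product.DataLayer
        {
            public class CustomerRepository
            {
                public string LoadCustomerName(int id) { return "customer-" + id; } // stub
            }
        }

        namespace MyCompany.Product.BusinessLayer
        {
            using MyCompany.Product.DataLayer;

            public class CustomerService
            {
                private readonly CustomerRepository _repository = new CustomerRepository();

                // The layer boundary is a namespace reference, not an assembly reference.
                public string GetGreeting(int id)
                {
                    return "Hello, " + _repository.LoadCustomerName(id);
                }
            }
        }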


  • Pure functional programming and game state

    - by Fu86
    Is there a common technique for handling state (in general) in a functional programming language? There are solutions in every (functional) programming language to handle global state, but I want to avoid that as far as I can. In a pure functional style, all state is passed through function parameters. So I would need to pass the whole game state (a gigantic hashmap with the world, players, positions, score, assets, enemies, ...) as a parameter to every function that wants to manipulate the world on a given input or trigger. The function itself picks the relevant information from the game-state blob, does something with it, and returns a manipulated game state. But this looks like a poor man's solution to the problem. If I pass the whole game state to all functions, it gives me no benefit over global variables or the imperative approach. Alternatively, I could pass just the relevant information into each function and return the actions to be taken for the given input, with one single function applying all the actions to the game state. But most functions need a lot of "relevant" information: move() needs the object position, the velocity, the map for collision, the positions of all enemies, current health, ... So this approach does not seem to work either. My question is: how do I handle this massive amount of state in a functional programming language - especially for game development?
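    One common technique (a hedged Haskell sketch, not from the question) is to thread the state implicitly through a State monad, so each action names only the fields it touches while the full state still flows through every call:

        import Control.Monad.State

        -- A deliberately tiny game state; the real one would hold world, assets, etc.
        data Game = Game { playerPos :: (Int, Int), score :: Int } deriving Show

        -- Each action reads/writes only the fields it needs, but the full state
        -- is threaded implicitly by the State monad rather than by hand.
        move :: (Int, Int) -> State Game ()
        move (dx, dy) = modify $ \g ->
          let (x, y) = playerPos g in g { playerPos = (x + dx, y + dy) }

        addScore :: Int -> State Game ()
        addScore n = modify $ \g -> g { score = score g + n }

        step :: State Game ()
        step = do
          move (1, 0)
          addScore 10

        main :: IO ()
        main = print (execState step (Game (0, 0) 0))

    In bigger games people often pair this pattern with lenses so deeply nested fields stay manageable.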


  • Query for server DefaultData & DefaultLog folders

    - by jamiet
    Do you ever need to query for the DefaultData & DefaultLog folders for your SQL Server instance? Well, I just did, and the following script enabled me to do that:

        DECLARE @HkeyLocal NVARCHAR(18), @MSSqlServerRegPath NVARCHAR(31), @InstanceRegPath SYSNAME;
        SELECT @HkeyLocal = N'HKEY_LOCAL_MACHINE'
        SELECT @MSSqlServerRegPath = N'SOFTWARE\Microsoft\MSSQLServer'
        SELECT @InstanceRegPath = @MSSqlServerRegPath + N'\MSSQLServer'

        DECLARE @SmoDefaultFile NVARCHAR(512)
        EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @SmoDefaultFile OUTPUT

        DECLARE @SmoDefaultLog NVARCHAR(512)
        EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @SmoDefaultLog OUTPUT

        SELECT ISNULL(@SmoDefaultFile, N'') AS [DefaultFile], ISNULL(@SmoDefaultLog, N'') AS [DefaultLog]

    I haven't done any rigorous testing or anything like that; all I can say is... it worked for me (on SQL Server 2012). Use as you see fit. Doubtless this information exists in a multitude of other places, but nevertheless I'm putting it here so I know where to find it in the future. Just for fun I thought I'd try this out against Windows Azure SQL Database (formerly SQL Azure). Unsurprisingly, it didn't work there:

        Msg 40515, Level 15, State 1, Line 16
        Reference to database and/or server name in 'MASTER.dbo.xp_instance_regread' is not supported in this version of SQL Server.

    @Jamiet


  • Calling a webservice via Javascript

    - by jeroenb
    If you want to consume a webservice, it's not always necessary to do a postback. It's not even that hard!

    1. Webservice
    You have to add the ScriptService attribute to the webservice:

        [System.Web.Script.Services.ScriptService]
        public class PersonsInCompany : System.Web.Services.WebService
        {

    Create a WebMethod:

        [WebMethod]
        public Person GetPersonByFirstName(string name)
        {
            List<Person> personSelect = persons.Where(p => p.FirstName.ToLower().StartsWith(name.ToLower())).ToList();
            if (personSelect.Count > 0)
                return personSelect.First();
            else
                return null;
        }

    2. Webpage
    Add a reference to your service to your ScriptManager. Then add some javascript, where you first call your webservice (ClassName.WebMethod, here PersonsInCompany.GetPersonByFirstName) and add a callback to catch the result from the webservice, using the result to update your page:

        <script type="text/javascript">
            function GetPersonInCompany() {
                var val = document.getElementById("MainContent_TextBoxPersonName");
                PersonsInCompany.GetPersonByFirstName(val.value, FinishCallback);
            }

            function FinishCallback(result) {
                document.getElementById("MainContent_LabelFirstName").innerHTML = result.FirstName;
                document.getElementById("MainContent_LabelName").innerHTML = result.Name;
                document.getElementById("MainContent_LabelAge").innerHTML = result.Age;
                document.getElementById("MainContent_LabelCompany").innerHTML = result.Company;
            }
        </script>

    If you have any questions, feel free to contact me! You can download the code here.


  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types: you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target system must run Solaris 11, and you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.
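    As a hedged sketch of the cpu-arch change mentioned above (the domain name 'ldom1' is hypothetical; check the admin guide for your release before relying on this):

        # Stop and unbind the guest before changing the property (hypothetical domain 'ldom1')
        ldm stop-domain ldom1
        ldm unbind-domain ldom1
        # 'generic' makes the domain migratable across different SPARC CPU types
        ldm set-domain cpu-arch=generic ldom1
        ldm bind-domain ldom1
        ldm start-domain ldom1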


  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;
        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    Going over this code I found on the internet, I got everything except the part where the file is created. In what format should the file name be? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

        File
        public File(String pathname)
        Creates a new File instance by converting the given pathname string into an abstract pathname. If the given string is the empty string, then the result is the empty abstract pathname.
        Parameters: pathname - A pathname string
        Throws: NullPointerException - If the pathname argument is null
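    For what it's worth, this is standard java.io.File behavior rather than anything specific to the snippet: a relative name like "myImage.png" is resolved against the JVM's current working directory (the user.dir system property, usually the directory the program was launched from), while an absolute path like "C:/myFolder/myImage.png" is used as-is. A small sketch:

        import java.io.File;

        public class WhereItSaves {
            public static void main(String[] args) {
                File relative = new File("myImage.png");
                // getAbsolutePath() resolves against the working directory (user.dir)
                System.out.println(relative.getAbsolutePath());
                System.out.println(System.getProperty("user.dir"));

                File absolute = new File("C:/myFolder/myImage.png");
                System.out.println(absolute.getAbsolutePath()); // used as-is
            }
        }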


  • What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?

    - by Adel
    I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean roughly some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two, "self-optimizing" and "self-protecting", are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if it takes up too much memory. Or maybe we want the ability to dynamically adjust the database memory as needed. For example, when we start the application the database begins with a 56 MB buffer, but as we insert many rows we want it to automatically jump up to 512 MB, then to 1024 MB, up to a maximum of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Do you suggest using Oracle Database? My professor believes that by using Java we can basically make up for any memory-management deficiencies that MySQL has relative to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it is because I'm still fleshing it out. Thanks.
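    A hedged sketch of the dynamic-adjustment idea: resizing the InnoDB buffer pool over JDBC. Note that online resizing of innodb_buffer_pool_size only works on MySQL 5.7.5 and later (older versions need a config edit and a restart), and the URL, credentials, and sizes below are illustrative assumptions:

        // Hedged sketch: grow the InnoDB buffer pool from Java via JDBC.
        // Requires MySQL 5.7.5+ for online resize and a user with SUPER privilege;
        // URL, credentials and target size are illustrative assumptions.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class BufferPoolGrower {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/", "root", "secret");
                     Statement stmt = conn.createStatement()) {
                    long target = 512L * 1024 * 1024; // grow from the initial pool to 512 MB
                    stmt.execute("SET GLOBAL innodb_buffer_pool_size = " + target);
                }
            }
        }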


  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3 hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running, collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3 hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75 MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics; this archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime; it will only eliminate these periodic backups.
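    For example, the same dms_config.xml entry with the periodic dumps disabled - only the enabled flag changes from the text above:

        <!-- Periodic metric dumps disabled; runtime DMS collection is unaffected -->
        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="false"/>
        </dumpConfiguration>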


  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors is divided into a few sections, one of which is HTTP. I assume that all broken links in HTTP were somehow found by the crawler and that these are not the links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention what the source page was, like the Linked From column in the sitemap section? And what is the meaning of Linked From there - if the name of the section is sitemap, all URLs should be taken from the sitemap, so why is there a Linked From? The second question: what is the best way to treat searching on the site? How come the search result pages are getting indexed? Because all search result pages are getting indexed, I have too many pages in Linked From. What's the right practice? Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice? Question four: how should I treat the Google Analytics code (with parameters PageView, PageLoadTime) in the case where a user requests a non-existent page - should I render the Google code or not? Right now I use the Google Analytics code on the common template page, such that every page, including non-existent pages with an error message, contains the Google Analytics code; it seems like this has an influence on WMT.
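    On the second question, the usual practice - sketched here under the assumption that the site's search results live under a /search path, which may not match this site - is to keep internal search result pages out of the index via robots.txt:

        # robots.txt - assuming search result pages live under /search (illustrative path)
        User-agent: *
        Disallow: /search

    Alternatively, a "noindex, follow" robots meta tag on the search-results template achieves the same per page without touching robots.txt.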


  • TFS API The All New Team Project Picker – Beautiful!

    - by Tarun Arora
    The Team Project Picker in TFS 2011 looks gorgeous. I especially like the status bar on the working state - at least it lets you know that the project picker is still working on getting the details - and of course the new icons for team project collections and team projects are stunning too.

    How do I get the Team Project Picker using the TFS API? That is fairly straightforward. Add a reference to the Microsoft.TeamFoundation.Client dll available in C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0 and use the code below:

        public void ConnectToTfs()
        {
            TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.MultiProject, false, new UICredentialsProvider());
            tfsPP.ShowDialog();
        }

    Download a sample application here.

    Why does my project picker look different? You might run into an issue where the project picker looks different from what you expect. When the Team Project Picker is run from inside of VS, the colour theme is picked up from VS itself. When running outside of VS, the Windows theme colours are used, so there can be differences between the two. Currently there isn't a way to change that, since the dialog itself is not public (just the wrapper that launches it). So don't be surprised if the Team Project Picker looks different than expected :-]
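    A hedged sketch of reading the selection back out of the dialog; TeamProjectPicker is disposable, so a using block is tidier. This is based on the TFS client API as I recall it - verify the property names against your assemblies:

        // Sketch: read the selected collection and projects back from the picker.
        // Assumes the same Microsoft.TeamFoundation.Client reference as above,
        // plus System.Windows.Forms for DialogResult.
        using (var picker = new TeamProjectPicker(TeamProjectPickerMode.MultiProject, false, new UICredentialsProvider()))
        {
            if (picker.ShowDialog() == DialogResult.OK)
            {
                TfsTeamProjectCollection collection = picker.SelectedTeamProjectCollection;
                Console.WriteLine("Collection: " + collection.Uri);

                foreach (ProjectInfo project in picker.SelectedProjects)
                {
                    Console.WriteLine("Project: " + project.Name);
                }
            }
        }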


  • Understanding interfaces [closed]

    - by user985482
    Possible Duplicate: When to use abstract classes instead of interfaces and extension methods in C#? Why are interfaces useful? What is the point of an interface? What other reasons are there to write interfaces rather than abstract classes? What is the point of having every service class have an interface? Is it bad habit not using interfaces?
    I am reading Microsoft Visual C# 2010 Step by Step, which I feel is a very good book for introducing you to the C# language. I have just finished reading a chapter on interfaces, and although I understood the syntax of creating and using interfaces, I have trouble understanding why I should use them. Correct me if I am wrong, but in an interface you can only declare method names and parameters; the body of the method has to be declared in the class that inherits the interface. So in this case, why should I declare an interface if I am going to define the entire method in the class that implements it? What is the point? Does this have something to do with the fact that a class can inherit multiple interfaces?
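    A minimal C# sketch of the usual answer, with invented types: the interface lets calling code depend on a contract while the concrete classes vary:

        using System;
        using System.Collections.Generic;

        interface IShape
        {
            double Area(); // contract only: name, parameters, return type
        }

        class Circle : IShape
        {
            public double Radius;
            public double Area() { return Math.PI * Radius * Radius; }
        }

        class Square : IShape
        {
            public double Side;
            public double Area() { return Side * Side; }
        }

        class Program
        {
            static void Main()
            {
                // The caller only knows the contract, not the concrete classes.
                var shapes = new List<IShape> { new Circle { Radius = 2 }, new Square { Side = 3 } };
                foreach (IShape s in shapes)
                    Console.WriteLine(s.Area());
            }
        }

    Swapping in a different IShape implementation - or a mock in a unit test - requires no change to the loop, which is the point of coding against the contract.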


  • Testing my model for hybrid scheduling in Embedded Systems

    - by markusian
    I am working on a project for school where I have to analyze the performance of a few fixed-priority server algorithms (polling server, deferrable server, priority exchange) using a simulator, in the case of hybrid scheduling where we have both hard periodic tasks and soft aperiodic tasks. In my model I consider that:
    - the hard tasks have a period equal to their deadline, with a known worst-case execution time (wcet); the actual execution time can be smaller than the wcet.
    - the soft tasks have a known wcet and random interarrival times; the actual execution time can be smaller than the wcet.
    In order to test those algorithms I need realistic case studies. For this reason I'm digging through the scientific literature, but I am facing several problems:
    - Sometimes I find a list of hard tasks with wcet, but it is not specified how the soft task parameters were found.
    - Given the wcet of a task, how can I model its actual execution time? That is, what random distribution should I use, considering the wcet?
    - How can I model the random interarrival times of soft aperiodic tasks?
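    A hedged sketch of one common set of modeling assumptions (not the only defensible ones): Poisson arrivals, i.e. exponentially distributed interarrival times, for the soft aperiodic tasks, and actual execution times drawn uniformly in a range capped by the wcet:

        # One common modeling assumption, not a prescription: exponential
        # interarrival times (Poisson arrivals) and uniform execution times in (0, wcet].
        import random

        def soft_task_arrivals(mean_interarrival, horizon):
            """Yield arrival times of a soft aperiodic task up to the horizon."""
            t = random.expovariate(1.0 / mean_interarrival)
            while t < horizon:
                yield t
                t += random.expovariate(1.0 / mean_interarrival)

        def actual_execution_time(wcet, minimum_fraction=0.1):
            """Draw an actual execution time no larger than the wcet."""
            return random.uniform(minimum_fraction * wcet, wcet)

        if __name__ == "__main__":
            for arrival in soft_task_arrivals(mean_interarrival=50.0, horizon=300.0):
                print(f"arrival at {arrival:.1f}, runs for {actual_execution_time(10.0):.2f}")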


  • ECS with Go - circular imports [migrated]

    - by Andreas
    I'm exploring both Go and Entity-Component-Systems. I understand how ECS works, and I'm trying to replicate what seems to be the go-to document on ECS, namely http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ For performance, the document recommends using static arrays of every component type - that is, not arrays of component interfaces (arrays of pointers). The problem with this in Go is circular imports. I have one package, ecs, which contains the definitions for the Entity, Component and System types/interfaces as well as an EntityManager. Another package, ecs/components, contains the various components. Obviously, the ecs/components package depends on ecs. But to declare arrays of specific components in EntityManager, ecs would have to depend on ecs/components, creating a circular import. Is there any way of avoiding this? I am aware that normally a high-level system should not depend on lower systems. I also want to point out that using an array of pointers is probably fast enough for my purposes, but I'm interested in possible workarounds (for future reference). Thank you for your help!
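    One hedged workaround, sketched below with invented names: let the ecs package define a small store interface and have each concrete component package register its typed storage with the EntityManager, so ecs never needs to import ecs/components:

        // file: ecs/ecs.go - defines interfaces only; never imports ecs/components
        package ecs

        type Entity uint32

        // ComponentStore is implemented by each concrete component package, so the
        // EntityManager can track per-type storage without knowing the concrete types.
        type ComponentStore interface {
            Remove(e Entity)
        }

        type EntityManager struct {
            stores []ComponentStore
        }

        func (m *EntityManager) Register(s ComponentStore) {
            m.stores = append(m.stores, s)
        }

        func (m *EntityManager) DestroyEntity(e Entity) {
            for _, s := range m.stores {
                s.Remove(e)
            }
        }

        // file: ecs/components/position.go - depends on ecs, never the other way round
        package components

        import "game/ecs" // illustrative import path

        type Position struct{ X, Y float64 }

        // PositionStore keeps the static, densely indexed slice the article
        // recommends, while satisfying ecs.ComponentStore.
        type PositionStore struct {
            data []Position
            live []bool
        }

        func (s *PositionStore) Remove(e ecs.Entity) {
            if int(e) < len(s.live) {
                s.live[e] = false
            }
        }

    The cost is one interface call per store when an entity is destroyed; the hot per-frame loops can still iterate the typed slices directly inside the components package.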


  • Installation questions

    - by user12609425
    I've gotten a couple more questions about the installation process for Ops Center.
    "Can I install on any SPARC / X86 based platform?" Ops Center can run on Oracle Solaris on either architecture, or on Linux. The Certified Systems Matrix lists the supported OSes, and the Linux and Solaris install guides go into more detail about the hardware and OS requirements.
    "Can we install it on local zones or LDoms?" Zones, yes; LDoms, sort of. You can install the Enterprise Controller in a local zone. There are a few caveats, which are explained in the Preparing a Non-Global Zone section. You can also install a Proxy Controller in an Oracle Solaris 11 zone. Agent Controllers, which are the part of the infrastructure that's installed on managed systems, can be put on zones or LDoms.
    "Do we need any dedicated network ports from all agent monitoring systems?" Yes. The port requirements are covered in the Network Port Requirements and Protocols table, which is in the feature reference guide as well as in the install guides.


  • Making The EBS Upgrade From 11.5.10 Easier - Part II

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part II
    PRODUCT FAMILY: E-Business Suite
    July 12, 2011 at 8 am PT, 9 am MT, 11 am ET
    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this second session, we'll cover the following topics:
    - Recap of Part I
    - Detailed Look at Maintenance Wizard
    - Detailed Look at Patch Wizard
    A short, live demonstration and question and answer period will be included. In the first part of the three-session series, we covered the following topics:
    - Overview of Tools Available for Upgrading
    - Upgrade versus Re-implementing
    - Upgrade Community
    - Upgrade Product Information Center Page
    - Detailed Look at Upgrade Advisor
    A replay of that session is available via Note 740964.1, Advisor Webcast Archive. A third session will be presented on July 19, 2011 to review best practices for using the upgrade tools. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.
    -------------------------------------------------------------------------------------------------------------
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.


  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace, using the SLOW sql (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer:

    -- Open a new session in SQL*Plus
    -- Make sure you are using an updated PLAN_TABLE
    -- (this can be done by dropping it and recreating it by running @?/rdbms/admin/utlxplan.sql)

        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

    -- Open a second session in SQL*Plus

        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.


  • More efficient range checking

    - by Mob
    I am going to use a specific example in my question, but overall it is pretty general. I use Java and libgdx. I have a ship that moves through space. In space there is debris that the ship can tractor-beam in and harvest. Debris is stored in a list, and each object contains its own x and y values, so currently there is no way to find a piece of debris's location without first looking at the debris object. Now, at any given time there can be a huge (1000+) amount of debris in space, and I figure that calculating the distance between the ship and every single piece of debris and comparing it to the maximum tractor beam length is rather inefficient. I have thought of dividing space into sectors, and having each sector contain a list of every object in it. This way I could check only nearby sectors. However, this essentially doubles memory for the list. (I would reference the same objects, so it wouldn't double overall; I am not a CS major, but I doubt this would be hugely significant.) This also means that any time an object moves it has to calculate which sector it is in - again not a huge problem. I also don't know if I can use some sort of 2D map that uses x and y values as keys, but since I am using float locations this sounds like more trouble than it's worth. I am kind of new to programming games, and I imagined there would be some elegant solution to this issue.
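    For what it's worth, the sector idea described above is usually implemented as a uniform grid (spatial hashing); a minimal Java sketch with illustrative names, not from the original post:

        import java.util.ArrayList;
        import java.util.List;

        // Minimal uniform-grid broad phase; names are illustrative.
        class DebrisGrid {
            static final float CELL = 64f;  // a cell size >= the tractor beam radius works well
            final int cols, rows;
            final List<List<Debris>> cells;

            DebrisGrid(float worldWidth, float worldHeight) {
                cols = (int) Math.ceil(worldWidth / CELL);
                rows = (int) Math.ceil(worldHeight / CELL);
                cells = new ArrayList<>(cols * rows);
                for (int i = 0; i < cols * rows; i++) cells.add(new ArrayList<>());
            }

            int index(float x, float y) {
                int cx = Math.min(cols - 1, Math.max(0, (int) (x / CELL)));
                int cy = Math.min(rows - 1, Math.max(0, (int) (y / CELL)));
                return cy * cols + cx;
            }

            void insert(Debris d) { cells.get(index(d.x, d.y)).add(d); }

            // Only the 3x3 block of cells around the ship is scanned, not all 1000+ objects.
            List<Debris> nearby(float x, float y) {
                List<Debris> result = new ArrayList<>();
                int cx = (int) (x / CELL), cy = (int) (y / CELL);
                for (int gy = cy - 1; gy <= cy + 1; gy++)
                    for (int gx = cx - 1; gx <= cx + 1; gx++)
                        if (gx >= 0 && gx < cols && gy >= 0 && gy < rows)
                            result.addAll(cells.get(gy * cols + gx));
                return result;
            }
        }

        class Debris { float x, y; }

    Float positions are fine here: the division by the cell size quantizes them into integer cell coordinates, which addresses the 2D-map-keyed-on-floats worry.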


  • Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

    - by ShawnBailey
    When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from, and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and hopefully easily. There are several ways that this collection could be scripted, and this is just one example.
    What is included:
    - wlst.properties: Ant properties
    - build.xml
    - soa_diagnostic_script.py: Python script
    What is collected:
    - 5 contextual thread dumps at 5 second intervals
    - Diagnostic log entries from the server
    - WLS image, which includes the domain configuration and WLS runtime data
    - Most of the SOA Diagnostic Dumps, including those for BPEL runtime, adapters, and composite information from MDS
    Instructions:
    1. Download the package and extract it to a location of your choosing
    2. Update the properties file 'wlst.properties' to match your environment
    3. Run 'ant' (must be on the path)
    4. Collect the zip package containing the files (by default it will be in the script.output location)
    Properties reference:
    - oracle_common.common.bin: location of oracle_common/common/bin
    - script.home: location where you extracted the script and supporting files
    - script.output: location where you want the collections written
    - username: user name for the server connection
    - pwd: password to connect to the server
    - url: T3 URL for the server connection, '<host>:<port>'
    - dump_interval: interval in seconds between thread dumps
    - log_interval: duration in minutes that you want to go back for diagnostic log information
    Script Package


  • Printer Canon MP540 successfully added finally but doesn't print? (Attached debug log)

    - by NES
    I tried to set up my printer in Ubuntu 10.10. I had to use a special guide to install, because Canon used libcupsys2 in its packages and Ubuntu expects libcupsys; I followed this guide in reference to this advice. The first problem was that the Ubuntu "add printer" dialog asked for root credentials. The suggested workaround in Launchpad - creating a root password - worked, and I added the printer. It was available for printing. Then I got an error message, "cups insecure filter", which prevented me from printing. That could be solved by setting the needed root rights on the /usr/lib/cups/filter/ directory; the error message disappeared after restarting the cups service. Now it should work, but it doesn't. The main problem now is that the printer seems to be properly set up, but when I try to print a document, the printer icon appears for a short time in the gnome panel and there's a print job in the queue which gets marked completed, but the printer doesn't print. I attached the debug log provided by printer error control (I had to upload it to another site, since it was too big for the question body here). Perhaps someone can identify the problem with it? Note: I know that it once worked fine with an older release of Ubuntu, but I'm not sure which version that was.


  • How to dynamically override a method in an object

    - by Ace Takwas
    If this is possible, how can I change what a method does after I have created an instance of that class, keeping the reference to that object but overriding a public method in its class definition? Here's my code:

        package time_applet;

        public class TimerGroup implements Runnable {
            private Timer hour, min, sec;
            private Thread hourThread, minThread, secThread;

            public TimerGroup() {
                hour = new HourTimer();
                min = new MinuteTimer();
                sec = new SecondTimer();
            }

            public void run() {
                hourThread.start();
                minThread.start();
                secThread.start();
            }

            /* Please pay close attention to this method */
            private Timer activateHourTimer(int start_time) {
                hour = new HourTimer(start_time) {
                    public void run() {
                        while (true) {
                            if (min.changed) // min.getTime() == 0
                                changeTime();
                        }
                    }
                };
                hourThread = new Thread(hour);
                return hour;
            }

            private Timer activateMinuteTimer(int start_time) {
                min = new MinuteTimer(start_time) {
                    public void run() {
                        while (true) {
                            if (sec.changed) // sec.getTime() == 0
                                changeTime();
                        }
                    }
                };
                minThread = new Thread(min);
                return min;
            }

            private Timer activateSecondTimer(int start_time) {
                sec = new SecondTimer(start_time);
                secThread = new Thread(sec);
                return sec;
            }

            public Timer addTimer(Timer timer) {
                if (timer instanceof HourTimer) {
                    hour = timer;
                    return activateHourTimer(timer.getTime());
                } else if (timer instanceof MinuteTimer) {
                    min = timer;
                    return activateMinuteTimer(timer.getTime());
                } else {
                    sec = timer;
                    return activateSecondTimer(timer.getTime());
                }
            }
        }

    So, for example, in the method activateHourTimer() I would like to override the run() method of the hour object without having to create a new object. How do I go about that?
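    For reference, and as general Java semantics rather than an answer from the post: Java cannot replace a method on an existing instance (an anonymous subclass like the one above always creates a new object), so the usual workaround is composition - keep one object but delegate to a swappable behavior. A hypothetical sketch:

        // Hypothetical sketch: swap behavior at runtime via delegation instead of overriding.
        public class SwappableTimer implements Runnable {
            private volatile Runnable behavior = new Runnable() {
                public void run() { /* default tick */ }
            };

            public void setBehavior(Runnable newBehavior) { // same object, new behavior
                this.behavior = newBehavior;
            }

            public void run() {
                behavior.run();
            }
        }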


  • How can I design my classes for a calendar based on database events?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations:

        class Calendar {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;

            public function __construct($month, $year, $translator) {}
            public function getCalendarCells() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;

            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many events stored in a database. My question is: what is the best way to manage these events in my classes? I'm thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a db (because I would have to inject at least a repository service) just to create and visualize a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
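    One hedged sketch of a way to keep the calendar reusable without a database (all names are invented for illustration): inject an event provider interface, with a null implementation as the default:

        <?php
        // Illustrative sketch: the calendar depends on an interface, not on the database.
        interface EventProviderInterface
        {
            /** @return CalendarEvent[] events occurring on the given day */
            public function getEventsForDay(\DateTime $day);
        }

        // A no-op provider keeps the calendar usable without any database.
        class NullEventProvider implements EventProviderInterface
        {
            public function getEventsForDay(\DateTime $day)
            {
                return array();
            }
        }

        class Calendar
        {
            private $eventProvider;

            public function __construct($month, $year, $translator, EventProviderInterface $eventProvider = null)
            {
                // Default to the null provider so existing callers keep working.
                $this->eventProvider = $eventProvider ?: new NullEventProvider();
            }
        }

    A Doctrine-backed provider can then be bound in the Symfony2 service container for apps that do want database events, without touching Calendar or CalendarCell.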

