Search Results

Search found 74687 results on 2988 pages for 'work from home'.


  • How to copy files via terminal?

    - by Levan
    This might sound silly to some people, but I'm new to Linux and don't know how to use it as well as others. Yes, I have read about copying files with the terminal, but these examples will help me a lot. So here is what I want to do: I have a file untitelds.mpg in /home/levan/kdenlive and I want to copy it to /media/sda3/SkyDrive without deleting anything in the SkyDrive directory. I have a file untitelds.mpg in /media/sda3/SkyDrive and I want to copy it to /home/levan/kdenlive without deleting anything in the kdenlive directory. I want to copy a folder from my home directory to sda3 without deleting anything in the sda3 directory, and the opposite. And I want to cut (move) a folder/file to another place without deleting the files already in the directory I move it into.
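    A minimal set of commands covering these cases, assuming the paths above ("somefolder" is a placeholder; cp and mv never delete other files in the target directory, they only overwrite a file that has the same name):

        cp /home/levan/kdenlive/untitelds.mpg /media/sda3/SkyDrive/    # copy the file over
        cp /media/sda3/SkyDrive/untitelds.mpg /home/levan/kdenlive/    # copy it back
        cp -r /home/levan/somefolder /media/sda3/                      # copy a whole folder (recursive)
        mv /home/levan/somefolder /media/sda3/                         # cut (move) a folder or file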

    Read the article

  • How to automatically mount a Windows shared folder on every boot up?

    - by Zabba
    I am able to access a Windows shared folder from Ubuntu 10.10 Nautilus like so: type smb://box/projects into the Location Bar. Now I can see the folder in Nautilus and create/read files in it. Also, on the desktop I get a folder called "projects on box". But that folder on the desktop goes away when I reboot. So I thought I could automount the Windows shared projects folder by adding this to my fstab (base is my user name on Ubuntu):

        //box/Projects /home/base/Projects smbfs rw,user,username=jack,password=www222,fmask=666,dmask=777 0 0

    Now I get a folder called "Projects" in my home folder after boot up, but it is empty (I cannot see the same files that I can see in Nautilus). What am I doing wrong? Some more detail: this is what I see of the Projects folder when I do ls -l in my home folder:

        ...
        drwxr-xr-x 2 root root 4096 2011-01-01 10:22 Projects
        drwxr-xr-x 2 base base 4096 2011-01-01 09:06 Public
        ...

    Note the two "roots". Is that somehow the problem?
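    A hedged alternative fstab entry, assuming a release where the cifs filesystem type has replaced the older smbfs, with uid/gid added so the mounted tree is owned by the desktop user rather than root (the root-owned, empty-looking Projects directory is consistent with the mount failing or being owned by root):

        //box/Projects /home/base/Projects cifs rw,user,username=jack,password=www222,uid=base,gid=base,file_mode=0666,dir_mode=0777 0 0

    After editing fstab, sudo mount -a tries the mount immediately and prints any error, so no reboot is needed to test.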

    Read the article

  • How to enable a symlink in this case

    - by Bragboy
    I cannot categorize this question under ubuntu since it has nothing to do with it, but I know people here can definitely answer this. I log in to one of my deployment boxes using SSH (no Ubuntu here). I am working with a tool called TeamCity, which uses a folder called ".BuildServer" under the home directory of the user. This folder may grow in size as the application runs, but the current user is only given a limited amount of space. The good thing is that I have access to a folder outside /home/deploy (deploy being the user here). I now want to link the .BuildServer inside the /home/deploy directory to the other folder I have permission for (meaning all the files should be re-routed to that directory). Hope my question was clear, please help.
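    A common pattern for this, sketched with a hypothetical /data/teamcity target directory (stop TeamCity first so nothing writes to .BuildServer while it moves):

        mv /home/deploy/.BuildServer /data/teamcity/.BuildServer
        ln -s /data/teamcity/.BuildServer /home/deploy/.BuildServer

    TeamCity keeps resolving ~/.BuildServer as before, but the data now lives on the larger volume.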

    Read the article

  • Runescape - Ubuntu 12.04 - Jagex cache folder

    - by user214179
    Does anyone here know how to hide the jagexcache folder and the jagexappletviewer and jagex_cl_runescape_LIVE files from the home folder? The problem is this: every time you play RuneScape (on both Windows and Linux), it creates those files and folders. On Windows you can hide them and everything is fine. On Linux, if you prefix the folder or file name with a dot to hide it, those files and folders are created again the next time you play, because the game cannot find them. As far as I know the game is Java based (it needs the Java IcedTea plugin and Java 7), so is there any way of changing the directory where the game puts all those files, like home/Documents instead of just /home? Thanks!
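    One hedged idea, assuming the applet viewer derives its cache location from the JVM's user.home property (an assumption about the launcher, not documented behavior): start it with that property overridden, e.g.

        # hypothetical launch command; the jar name is taken from the question
        java -Duser.home=/home/you/Documents/runescape -jar jagexappletviewer.jar

    If the launcher honors user.home, the cache files should land under the overridden path instead of the real home folder.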

    Read the article

  • Why isn't gsettings enough to reset my nautilus settings?

    - by berdario
    I'm trying to narrow down a bug (to eventually report it, or give up if it turns out that resetting a simple setting is enough to get rid of it). I noticed that the bug doesn't show up with a brand new user, so I tried to reset my user's nautilus config: I renamed .config/nautilus, I renamed .nautilus2 (strange name... why the 2?), and I ran gsettings reset-recursively org.gnome.nautilus. Strangely enough, it didn't work. I had previously set my home as the desktop folder, and the key has correctly been reset:

        $ gsettings get org.gnome.nautilus.preferences desktop-is-home-dir
        false

    and yet on my desktop I still see the contents of my home folder (meaning the settings have not been reset). I know that with the transition to GTK 3 there have been lots of changes (before, I should have used gconf... but now there are dconf and gsettings), so there are quite a few questions about nautilus out there... but unfortunately they now seem to be outdated.
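    For completeness, a hedged way to wipe every nautilus key at the dconf level rather than per-schema (the path is the standard one for nautilus under GNOME 3):

        nautilus -q                          # quit any running nautilus so the reset is re-read
        dconf reset -f /org/gnome/nautilus/  # recursively reset everything under this path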

    Read the article

  • Unable to start GUI app from upstart

    - by novice
    As part of post-start of my app, say "mydaemon", I want to launch a GUI app, say "mygui", but I am unable to do this. I have verified user permissions using xhost, and the DISPLAY variable is set correctly. The conf file in /etc/init/ is given below:

        me@ubuntu:~/term$ cat /etc/init/agentd.conf
        description "my daemon"
        author "me"

        start on runlevel [2345]
        stop on runlevel [016]

        console output
        kill timeout 60
        respawn
        respawn limit 3 15

        # Allow some clean up time
        post-stop script
            env DISPLAY=:0.0
            cd /home/me
            ./mygui
            sleep 1
        end script

        script
            cd /home/me
            ./myapp
        end script

        post-start script
            env DISPLAY=:0.0
            cd /home/me
            ./mygui
        end script

    Any suggestions?

    Read the article

  • How do I get a Tascam US-122L working?

    - by rich
    I have a Tascam US-122L. Since Windows is really crappy with this device, I decided to try Linux, Ubuntu to be specific. I tried searching different forums and sites on how to make my US-122L work, but most of the links and sites that teach how to make it work are really outdated, and I can't get myself to trust a tutorial published in 2009 and last edited in early 2010. Does anyone know how to make my Tascam US-122L work? If not, please give a link on how to make it work, at least from someone who is using one and has made theirs work.
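    A hedged starting point, assuming current ALSA packaging where this interface is handled by the snd-usb-us122l kernel module and its firmware is loaded by the ALSA loader tools:

        sudo apt-get install alsa-tools alsa-firmware-loaders
        sudo modprobe snd-usb-us122l
        aplay -l    # the US-122L should now show up in the device list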

    Read the article

  • gfortran in Ubuntu 12.10

    - by user115334
    I hope my message gets read soon and somebody will give me a solution. I use Fortran to do simulations, and gfortran is the compiler I use. Recently I migrated from Ubuntu 10.10 to 12.10. After installing gfortran I tried to compile and run my Fortran programs, and the problem started: I successfully compile the program, but I am unable to execute it. (I work in a directory on a shared partition, not in my HOME directory.) When I compile the program and run it within the HOME directory, everything works fine. On Ubuntu 10.10 I was able to compile and execute Fortran programs from everywhere, not only within the HOME directory. This is what I do to compile and execute a Fortran program:

        gfortran hello.f90 -o hello    # to compile it
        ./hello                        # to execute it

    I'm blind about PATH or anything like it (which I suspect has something to do with it), so please give me directions.
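    A hedged guess given those symptoms: if the shared partition is FAT or NTFS, Ubuntu typically mounts it without execute permission, so binaries there compile fine but cannot be executed. A quick check-and-fix sketch (/media/shared is a hypothetical mount point; substitute yours):

        mount | grep /media/shared                 # look for noexec, or a vfat/ntfs mount
        sudo mount -o remount,exec /media/shared   # re-enable execution for this session
        ./hello                                    # try again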

    Read the article

  • Issues with time slicing

    - by user12331
    I was trying to see the effect of time slicing, and how it can consume a significant amount of time. Actually, I was trying to divide a certain amount of work among a number of threads and see the effect. I have a two core processor, so two threads can run in parallel. I was trying to compare having work w done by 2 threads against having the same work done by t threads, with each thread doing w/t of the work. How much of a role does time slicing play in it? As time slicing is a time consuming process, I was expecting that when I do the same work with a two thread process versus a t thread process, the amount of time taken by the t thread process will be more. Any suggestions?
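    A minimal timing harness for this experiment, sketched in Java with hypothetical numbers: the same total work is split across t threads, and wall-clock time is printed as t grows past the number of cores, which is where the extra time slicing (context switching) would show up.

        import java.util.concurrent.*;

        public class SliceTest {
            public static void main(String[] args) throws Exception {
                final long work = 200_000_000L;                     // total iterations, w
                for (int t : new int[] {1, 2, 4, 8, 16}) {
                    ExecutorService pool = Executors.newFixedThreadPool(t);
                    long start = System.nanoTime();
                    final long slice = work / t;                    // each thread does w/t
                    for (int i = 0; i < t; i++) {
                        pool.submit(() -> {
                            long s = 0;
                            for (long j = 0; j < slice; j++) s += j;   // CPU-bound busy loop
                        });
                    }
                    pool.shutdown();
                    pool.awaitTermination(1, TimeUnit.MINUTES);
                    System.out.println(t + " threads: " + (System.nanoTime() - start) / 1_000_000 + " ms");
                }
            }
        }

    On a two-core machine the expectation described above would show up as the 8 and 16 thread runs being no faster, and usually slightly slower, than the 2 thread run.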

    Read the article

  • Why do I get this error trying to compile libxml2?

    - by bfaskiplar
    Although I have installed libxml2 once and reinstalled it a few times, I cannot compile C source code because the compiler cannot find where the header file is. I am able to locate it (in the folder where I downloaded the tar.gz package), but I have a feeling in my gut that this package isn't installed correctly, because when I try sudo make install, it says:

        /bin/bash: /home/bfaskiplar/Downloads/tar.gz: No such file or directory
        make[2]: *** [install-libLTLIBRARIES] Error 127
        make[2]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make[1]: *** [install-am] Error 2
        make[1]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make: *** [install-recursive] Error 1

    That's why I installed the Synaptic package manager and reinstalled it, but in that case, isn't it supposed to put the header files in the default directory where gcc normally searches? Currently I am able to compile with the -I option, but I wonder why I have to copy headers manually even though I used Synaptic for the installation, and why I am getting Error 1 and Error 2 when trying to install the package manually. Thanks in advance.
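    Two hedged observations: the build path contains a space ("tar.gz packages"), which is what the /bin/bash: /home/bfaskiplar/Downloads/tar.gz: No such file or directory line points at (the path is being split at the space, hence Error 127), and once the distro package is installed the headers live under a versioned include directory that gcc does not search by default, which xml2-config resolves. A sketch (myprog.c is a placeholder):

        # rebuild from a path without spaces
        mv '/home/bfaskiplar/Downloads/tar.gz packages' /home/bfaskiplar/Downloads/pkgs

        # or compile against the installed library; xml2-config emits the right -I and -l flags
        gcc myprog.c $(xml2-config --cflags --libs) -o myprog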

    Read the article

  • There will finally be a Firefox application on the iPhone, but it will not be a browser: Mozill

    There will finally be a Firefox application on the iPhone, but it will not be a browser. On a first (too) quick reading of the title of the Mozilla Foundation's blog post, you think "this is it!!!", Tristan Nitot has been telling us tall tales, there will indeed be a Firefox for the iPhone. In fact, there will indeed be a Firefox, but it will not be the browser. It will be a service called Firefox Home, hence the title of Mozilla's post: "Firefox Home Coming Soon to the iPhone". "Firefox Home [offers], on devices or platforms where we are not able to provide a 'full' browser (for techni...

    Read the article

  • Exclude a sub directory in a protected directory

    - by user1351358
    I need to exclude one of the folders inside a protected directory from protection with .htaccess. I put the .htaccess here:

        /home/mysite/public_html/new/administrator/.htaccess

    The directory that needs to be excluded from protection:

        /home/mysite/public_html/new/administrator/components/com_phocagallery/

    My .htaccess file:

        AuthUserFile "/home/mysite/.htpasswds/public_html/new/administrator/passwd"
        AuthType Basic
        AuthName "admin"
        require valid-user
        SetEnvIf Request_URI "(/components/com_phocagallery/)$" allow
        Order allow,deny
        Allow from env=allow
        Satisfy any

    I tried, but it does not work for my purpose. I suspect my path to the excluded directory may have a mistake. Please advise me. Thanks.
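    One hedged observation on the rule itself: the $ anchor means the SetEnvIf only matches requests whose URI ends exactly with /components/com_phocagallery/, so every file inside that folder fails the match and stays protected. A sketch without the anchor:

        SetEnvIf Request_URI "/components/com_phocagallery/" allow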

    Read the article

  • System.getProperty("user.dir") does not return my project root path, but the path where my Eclipse is located

    - by facebook-100005613813158
    As the title says, I have a class named GetException.java; inside it, I read an XML file in a static initializer block (because this document is shared):

        static {
            ...
            document = db.parse(new File(System.getProperty("user.dir") + "/src/exception/ExceptionCode.xml"));
            ...
        }

    To test whether the file path is correct, I wrote a main function inside GetException.java, and it proves that the path is correct: the XML file can be read successfully. My project root dir is "/home/wuchang/workspace/MongodbI". But when this class is loaded from another class, such as when I call one of its static functions, it reports the error message:

        /home/mrs/??/eclipse/src/exception/ExceptionCode.xml (No such file or directory)

    /home/mrs/??/eclipse/ is actually my Eclipse installation directory. So I wonder how System.getProperty("user.dir") returned the Eclipse installation directory to me instead of my project root directory?
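    user.dir is the working directory of the JVM process, i.e. wherever that JVM was started from (evidently the Eclipse installation directory here), not the project root. A hedged alternative is to load the file through the classpath so the working directory stops mattering; this assumes ExceptionCode.xml gets copied to the output folder so it is reachable at /exception/ExceptionCode.xml on the classpath:

        // load via the classloader instead of a filesystem path
        InputStream in = GetException.class.getResourceAsStream("/exception/ExceptionCode.xml");
        document = db.parse(in);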

    Read the article

  • Samba login before requesting list of shares?

    - by user69359
    I'm relatively inexperienced at this and am starting to get a little frustrated. I have an Ubuntu server with several user accounts. The Samba file server hosts a number of public shares plus the home directory of each user as a private share. When connecting to the server using my OS X machine, I am prompted to enter my login data and get an overview of all the public shares plus my account's home directory share, exactly the way I want it. However, when I connect to the server from an Ubuntu machine, I am never prompted to enter any login data and can only see the public folders. If a user wants to connect to their home directory on the server, they have to go through "Connect to Server" and mount their specific private share. Is there any way to configure Samba or the Ubuntu client so that the user experience is more similar to how it is on my OS X box? Thank you for your time!
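    A hedged client-side workaround: browsing the server as a named user rather than a guest forces the credential prompt, after which the per-user [homes] share should be listed alongside the public ones. In Nautilus, press Ctrl+L and enter (alice and server are placeholders):

        smb://alice@server/

    Mounting the private share directly is another option: sudo mount -t cifs //server/alice /mnt/alice -o username=alice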

    Read the article

  • I have a WinXP machine with 2 ethernet ports. One is connected to a LAN, another is connected to a WAN. How do I make this work?

    - by HappyEngineer
    I have a WinXP machine which has 2 ethernet ports. The information I've found indicates that the first NIC in the advanced settings list is the one that receives all traffic. I'd like to configure them so that all traffic destined for a particular IP range goes to one NIC and the rest goes to the other NIC. Is that possible? If so, do I need additional software like ZoneAlarm to shape the traffic?
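    No extra software should be needed for destination-based splitting; the XP routing table can pin one IP range to the LAN NIC while the default route stays on the WAN NIC. A hedged sketch (the addresses and interface number are hypothetical):

        route print                                              # note each NIC's interface number
        route -p add 10.1.0.0 mask 255.255.0.0 10.1.0.1 if 0x2   # persistently route this range via the LAN NIC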

    Read the article

  • When I remove the SSL port 443 from IIS my website stops loading, how can I have it just work with only port 80 and no SSL?

    - by shogun
    I am trying to disable SSL: I deleted the 443 port binding so there is only an entry for port 80, and now the site won't load at all. If I re-add the 443 configuration, it loads fine. What is causing it to require that? Why can't I set it up to run without SSL? Instead of just failing, it should load the page without HTTPS. If I disable port 443 and then browse via HTTP, it STILL fails even though I am not trying to use HTTPS. What gives? When/where/how does it decide to use SSL?

    Read the article

  • Using a regex to determine domain using JavaScript

    - by jerome
    Hi All, if, as here at work, we have test, staging, and production environments, such as:

        http://test.my-happy-work.com
        http://staging.my-happy-work.com
        http://www.my-happy-work.com

    I am writing some JavaScript that will redirect the browser to a URL such as http://[environment].my-happy-work.com/my-happy-video, so I need to be able to determine the current environment we are in. There is the possibility that I will currently be at a URL such as http://[environment].my-happy-work.com/my-happy-path/my-happy-resource. I want to be able to grab the window.location but strip it of everything except http://[environment].my-happy-work.com, and then append "/" + "my-happy-video" to that string. I am not skilled with regex, but I suppose there would be a way to parse the window.location up to the ".com". Thoughts? Thanks!
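    A sketch that leans on the location object (protocol plus host already exclude the path), with a regex alternative since the question asks for one:

        // location-object approach: no parsing needed
        var base = window.location.protocol + "//" + window.location.host;
        var videoUrl = base + "/my-happy-video";

        // regex alternative: capture everything up to the first "/" after the host
        var match = window.location.href.match(/^(https?:\/\/[^\/]+)/);
        if (match) { videoUrl = match[1] + "/my-happy-video"; }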

    Read the article

  • Understanding Thread/BeginInvoke? [beginner]

    - by Moberg
    Consider the code:

        class Work
        {
            public void DoStuff(string s)
            {
                Console.WriteLine(s); // .. whatever
            }
        }

        class Master
        {
            private readonly Work work = new Work();

            public void Execute()
            {
                string hello = "hello";

                // (1) is this an ugly hack?
                var thread1 = new Thread(new ParameterizedThreadStart(o => this.work.DoStuff((string)o)));
                thread1.Start(hello);
                thread1.Join();

                // (2) is this similar to the one above?
                new Action<string>(s => this.work.DoStuff(s)).BeginInvoke(hello, null, null);
            }
        }

    Is (1) an acceptable way of easily starting some work in a separate thread? If not, a better alternative would be much appreciated. Is (2) doing the same? I guess what I ask is whether a new thread is started, or.. Hope you can help a beginner to a better understanding :) /Moberg

    Read the article

  • How to protect access to a url?

    - by ibiza
    I need to create a PHP file that will do some work on my web server and that will be called from a program on another server over the internet. Suppose the PHP file that will do the work is located at www.example.com/work.php. What is the best way to protect against unsolicited calls to www.example.com/work.php? What I need is some mechanism so that when the intended program accesses the URL (with some query string parameters), the work gets done, but if somebody types www.example.com/work.php into their browser, access is denied and no work is done. The way I've thought of is to add a 'token' to the query string that would be constructed by some algorithm in the calling program; a sample result could be to append this to the URL: ?key=randomKeyAtEachCall&token=SomeHexadecimalResultCalculatedFromTheKey, and the key and token would be validated with a reverse algorithm on the PHP side. Is that safe? Are there any better ideas?
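    A hedged sketch of the key-plus-token idea using an HMAC, which avoids a reversible "algorithm" an observer could reverse-engineer (PHP 7+ for random_bytes; $secret is a hypothetical value shared by both servers and never sent over the wire):

        <?php
        // caller side: build the URL
        $secret = 'long-random-shared-secret';
        $key    = bin2hex(random_bytes(16));                 // fresh random key per call
        $token  = hash_hmac('sha256', $key, $secret);
        $url    = "http://www.example.com/work.php?key=$key&token=$token";

        // work.php side: validate before doing any work
        $expected = hash_hmac('sha256', $_GET['key'] ?? '', $secret);
        if (!hash_equals($expected, $_GET['token'] ?? '')) {
            http_response_code(403);
            exit('denied');
        }
        // ... do the work ...

    This alone does not stop replay of a captured URL; including a timestamp in the HMAC input and rejecting stale requests is a common refinement, and serving the endpoint over HTTPS helps too.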

    Read the article

  • Drupal subscriptions to taxonomy only

    - by Disco
    Playing around with the Subscriptions module, I have some trouble getting it to send the right notification for the right subscription. Here's the situation: I have a content type 'work' with a CCK taxonomy field; when creating the content, the user chooses one category that his 'work' falls under. In the user profile, under Categories (user/3/subscriptions/taxa), I choose two categories, let's say 'house work' and 'car work'. When creating a new 'work' content item, I do not get the notification. But when I manually select content type 'work' in the user's profile, I do get the notification e-mail, yet regardless of which 'category' I had chosen. This is quite annoying, since I only want the user to receive notifications for the taxonomy terms he has chosen, not for every new content item of type 'work'. Am I missing something obvious here?

    Read the article

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit. Why the Ajax Control Toolkit? The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends -- workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications. Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don’t want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution. The Ajax Control Toolkit is Vast The Ajax Control Toolkit consists of 40 controls. That’s a lot of controls (For the sake of comparison, jQuery UI consists of only 8 controls – those slackers J). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And, people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that the Ajax Control Toolkit works across all of these different browsers and operating systems. Testing, Testing, Testing Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server -- named Schizo -- hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (We also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers. QUnit Tests for the Ajax Control Toolkit Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios, that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions. 
In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We’ve been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions. We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page. Automating the QUnit Tests across Multiple Browsers Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig’s TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here’s how the JavaScript Test Server works: 1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, the RunTest.aspx records the test results back to the JavaScript Test Server. 2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, FireFox, Chrome, Opera, Google, and Safari. Now that we have the JavaScript Test Server setup, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button. A New Release of the Ajax Control Toolkit Each Month The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests. So we have plenty of work on our plates J At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls. 
This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints. We plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which was released on April 1, 2011. We plan to release a new version in early May. Conclusion Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time, they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team's ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.
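Given the QUnit discussion above, a minimal example of the kind of test described, with hypothetical names (this illustrates the framework's style of the era, it is not a test from the Toolkit's actual suite):

    // QUnit 1.x style, matching the period described in the post
    module("Accordion");

    test("selecting a pane updates SelectedIndex", function () {
        var accordion = createAccordion("#testTarget");   // hypothetical setup helper
        accordion.set_SelectedIndex(1);
        equal(accordion.get_SelectedIndex(), 1, "pane 1 is selected");
    });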

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as decribed in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs and these may contain information from Selective Tracing sessions. 
Examples of what it currently collects: (for details please see the links in the Resources section) Diagnostic Logs (ODL) Diagnostic Framework Incidents (DFW) SOA MDS Deployment Descriptors SOA Repository Summary Statistics Thread Dumps Complete Domain Configuration RDA Resources: Webcast Recording: Using RDA with Oracle SOA Suite 11g Blog Post: Diagnose SOA Suite 11g Issues Using RDA Download RDA How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1] How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] top DFW (Diagnostic Framework) DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW. Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps. Incident: A collection of Diagnostic Dumps associated with a particular problem Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created. WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW, however, it can be a source of DFW Incident creation through the use of a 'Watch'. WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution. Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered the specified dumps will be collected and added to the Incident Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule ADR: Automatic Diagnostics Repository; Defined for every server in a domain. This is where Incidents are stored Now let's walk through a simple flow: Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1 DFW Log Condition for OWS-04086 evaluates to TRUE DFW creates a new Incident in the ADR for managed server 1 DFW executes the specified Diagnostic Dumps and adds the output to the Incident In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request. How is it related to the other tools? DFW generates Incidents which are collections of Diagnostic Dumps. One of the system level Diagonstic Dumps collects the current server diagnostic log which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the below resources go into more detail. 
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool) where there are commands to do the following: Create an Incident Execute a single Diagnostic Dump Describe a Diagnostic Dump List the available Diagnostic Dumps The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA, however, DFW is geared towards collecting specific runtime information when an issue occurs while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well. Starting in 11.1.1.6, SOA Suite is including its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086. soa.config: MDS configuration files and deployed-composites.xml soa.composite: All artifacts related to the deployed composite soa.wsdl: Summary of endpoints configured for the composite soa.edn: EDN configuration summary if applicable soa.db: Summary DB information for the SOA repository soa.env: Coherence cluster configuration summary soa.composite.trail: Partial audit trail information for the running composite The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. DFW Resources: Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework Diagnostic Framework Documentation DFW WLST Command Reference Documentation for SOA Diagnostic Dumps in 11.1.1.6 top Selective Tracing Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run. When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criteria and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. 
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available Context Criteria: Application Name Client Address Client Host Composite Name User Name Web Service Name Web Service Port Selective Tracing Resources: Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues How to Use Selective Tracing for SOA [ID 1367174.1] Selective Tracing WLST Reference top DMS (Dynamic Monitoring Service) DMS exposes runtime information for monitoring. This information can be monitored in two ways: Through the DMS servlet As exposed MBeans The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system. When you want to trigger a WLDF Watch based on a metric exposed through DMS. How is it related to the other tools? DMS metrics can be monitored with WLDF Watches which can in turn notify DFW to create an Incident. DMS Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How to Reset a SOA 11g DMS Metric DMS Documentation top ODL (Oracle Diagnostic Logging) ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry: [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[ When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log. How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. 
Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events. ODL Resources: ODL Documentation top ADR (Automatic Diagnostics Repository) ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to the as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer' and here is where you'll find the 'adr' directory which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/ In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW and this ADR home contains SOA Suite related Incidents Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored. When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle you can simply zip it up and provide it. How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default and ADRCI relies on the ADR locations to help manage the contents. top ADRCI (Automatic Diagnostics Repository Command Interpreter) ADRCI is a command line tool for packaging and managing Incidents. When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support. How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6 IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path, otherwise it will give a warning and package only the Incident files. ADRCI Resources: How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1] ADRCI Documentation top WLDF (WebLogic Diagnostic Framework) WLDF is functionality available in WebLogic Server since version 9. Starting with FMw 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow: There is a need to monitor the performance of your SOA Suite message processing A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance. 
The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX. The Watch is triggered when the threshold is exceeded and fires the Notification. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time. When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered. How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification. WLDF Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1] WLDF Documentation top
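Given the DFW WLST commands described above, a hedged sketch of a session (credentials and host are placeholders; dump names vary by installation, soa.composite being one of the 11.1.1.6 dumps listed earlier):

    # WLST: list, inspect, and execute Diagnostic Dumps
    connect('weblogic', 'password', 't3://adminhost:7001')
    listDumps()
    describeDump(name='soa.composite')
    executeDump(name='soa.composite', outputFile='/tmp/composite_dump.txt')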

    Read the article

  • Microphone not working in Windows Virtual PC (on Windows 7)

    - by Clay Nichols
    I"m using Windows Virtual PC on Windows 7 (host) running Windows XP (as the Guest O/S) I'm trying to get the Microphone working. When I Enable Integration Features: Microphone does not work When I run the Sound Recorder, the record button is disabled. If I look at Sound settings, there are no options for the Mic (it's all disabled "grayed out"). Speakers work Copy & Paste works When I Disable Integration Features: Microphone and speakers work Copy and Paste does not (as expected) Drag'n Drop copying does not work in either situation. What I've Tried Verified that the Windows XP Mode Virtual PC guest also has the same symptoms (Mic doesn't work) and audio out (speakers) do work. I"m going to try (but have little hope) to: -Uninstall and Reinstall the Integration addin for Virtual PC

    Read the article

  • Open Source Scheduling Software?

    - by Kaiser Advisor
    Hi Everyone, I'm looking for scheduling software to schedule 25 people across 8 work sites. Most are full-time and can work up to 40 hours a week, but some are part-time and can only work certain days of the week, up to a certain number of hours a week. There are 3 classes of employees: Managers, Supervisors, and Workers. They should be shuffled so that they spend approximately equal time at each of the 8 work sites and with all classes of employees; e.g., Joe the Worker should spend about 1 out of every 8 days at each work site, and work with Managers, Supervisors, and other Workers equally. I tried to do this in Excel with the Solver, but the shuffling requirement makes it way too complicated, so I'm stuck doing big parts of this manually, with the Solver helping out with just the hour-provisioning piece. Is there any open source software that could help me? Much appreciated! KA

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functionality and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info in how FromBody works and several related issues to how POST data is mapped, you can check out Mike Stalls post: How WebAPI does Parameter Binding Not really sure where the Web API team thought [FromBody] would really be a good fit other than a down and dirty way to send a full string buffer. Extending Web API to make multiple POST Vars work? Don't think so Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible primarily because Web API's Routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it. Do we really need multi-value POST mapping? I think that that POST value mapping is a feature that one would expect of any API tool to have. If you look at common APIs out there like Flicker and Google Maps etc. they all work with POST data. POST data is very prominent much more so than JSON inputs and so supporting as many options that enable would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allows you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior and Web APIs non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question. 
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding. Related Articles: Passing multiple POST parameters to Web API Controller Methods; Mike Stall's post: How WebAPI does Parameter Binding; Where does ASP.NET Web API Fit? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article
