Search Results

Search found 7618 results on 305 pages for 'backup exec'.


  • parse more items

    - by user449891
    Currently I'm using zRSSFeed to parse a Menalto Gallery2 RSS feed, and only get about 5 details: link, description, title, etc. There are about 11 items within the <item> tag. How can I get zRSSFeed to return all of them, including <media:thumbnail url="http..."> which includes a colon? Code from ZRSSFeed var html='';var row='odd';var xml=getXMLDocument(data.xmlString);var xmlEntries=xml.getElementsByTagName('item'); //if(options.header)html+='<div class="rssHeader">'+'<a href="'+feeds.link+'" title="'+feeds.description+'">'+feeds.title+'</a>'+'</div>'; //html+='<div class="rssBody">'+'<ul>';for(var i=0;i<feeds.entries.length;i++){ html+='<div class="rssBody">';for(var i=0;i<feeds.entries.length;i++){ var entry=feeds.entries[i];var entryDate=new Date(entry.publishedDate);var pubDate=entryDate.toLocaleDateString()+' '+entryDate.toLocaleTimeString(); //html+='<li class="rssRow '+row+'">' html+='<div>' //if(options.date)html+='<div>'+pubDate+'</div>' if(options.content){ //if(options.snippet&&entry.contentSnippet!=''){ //var content=entry.contentSnippet; //}else{ var content=entry.content; sq_arr = content.split('>'); sq_brr = sq_arr[0].split('?'); sq_crr = sq_arr[1].split(' width'); sq_drr = sq_crr[0].split('src'); sq_b = new RegExp(/\d+(?=\")/g).exec(sq_drr[1]); sq_c = sq_b*1-1; sq_rplc = sq_brr[1].replace(/\d+(?=\")/g, sq_c); sq_str = sq_brr[0] + '?g2_view=core.DownloadItem&' + sq_rplc + '>' + sq_crr[0] +'" height="75" width="75"></a>'; content = sq_str.replace(/&amp;/g, '&'); //} //html+='<p>'+content+'</p>' html+=content //html+='<'+options.titletag+'><a href="'+entry.link+'" title="View this feed at '+feeds.title+'" target="'+options.linktarget+'">'+entry.title+'</a></'+options.titletag+'>' } (A more human readable version -- cwallenpoole) var html=''; var row='odd'; var xml=getXMLDocument(data.xmlString); var xmlEntries=xml.getElementsByTagName('item'); html+='<div class="rssBody">'; for(var i=0;i<feeds.entries.length;i++){ var entry=feeds.entries[i]; var entryDate=new Date(entry.publishedDate); var pubDate=entryDate.toLocaleDateString()+' '+entryDate.toLocaleTimeString(); html+='<div>' if(options.content){ var content=entry.content; sq_arr = content.split('>'); sq_brr = sq_arr[0].split('?'); sq_crr = sq_arr[1].split(' width'); sq_drr = sq_crr[0].split('src'); sq_b = new RegExp(/\d+(?=\")/g).exec(sq_drr[1]); sq_c = sq_b-1; sq_rplc = sq_brr[1].replace(/\d+(?=\")/g, sq_c); sq_str = sq_brr[0] + '?g2_view=core.DownloadItem&' + sq_rplc + '>' + sq_crr[0] +'" height="75" width="75"></a>'; content = sq_str.replace(/&amp;/g, '&'); html+=content } // missing }???
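
    A hedged sketch of one way to read the namespaced element directly, assuming the feed declares the standard Media RSS namespace (xmlns:media="http://search.yahoo.com/mrss/") and that xml is the parsed feed document zRSSFeed already builds; the loop below is illustrative, not part of zRSSFeed:

        var MEDIA_NS = 'http://search.yahoo.com/mrss/';  // assumption: standard Media RSS namespace
        var items = xml.getElementsByTagName('item');
        for (var i = 0; i < items.length; i++) {
            // a namespace-aware lookup sidesteps the ':' in 'media:thumbnail'
            var thumbs = items[i].getElementsByTagNameNS(MEDIA_NS, 'thumbnail');
            var thumbUrl = thumbs.length > 0 ? thumbs[0].getAttribute('url') : null;
            if (thumbUrl) {
                html += '<img src="' + thumbUrl + '" height="75" width="75">';
            }
        }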

    Read the article

  • java ioexception error=24 too many files open

    - by MattS
    I'm writing a genetic algorithm that needs to read/write lots of files. The fitness test for the GA is invoking a program called gradif, which takes a file as input and produces a file as output. Everything is working except when I make the population size and/or the total number of generations of the genetic algorithm too large. Then, after so many generations, I start getting this: java.io.FileNotFoundException: testfiles/GradifOut29 (Too many open files). (I get it repeatedly for many different files, the index 29 was just the one that came up first last time I ran it). It's strange because I'm not getting the error after the first or second generation, but after a significant amount of generations, which would suggest that each generation opens up more files that it doesn't close. But as far as I can tell I'm closing all of the files. The way the code is set up is the main() function is in the Population class, and the Population class contains an array of Individuals. Here's my code: Initial creation of input files (they're random access so that I could reuse the same file across multiple generations) files = new RandomAccessFile[popSize]; for(int i=0; i<popSize; i++){ files[i] = new RandomAccessFile("testfiles/GradifIn"+i, "rw"); } At the end of the entire program: for(int i=0; i<individuals.length; i++){ files[i].close(); } Inside the Individual's fitness test: FileInputStream fin = new FileInputStream("testfiles/GradifIn"+index); FileOutputStream fout = new FileOutputStream("testfiles/GradifOut"+index); Process process = Runtime.getRuntime().exec ("./gradif"); OutputStream stdin = process.getOutputStream(); InputStream stdout = process.getInputStream(); Then, later.... try{ fin.close(); fout.close(); stdin.close(); stdout.close(); process.getErrorStream().close(); }catch (IOException ioe){ ioe.printStackTrace(); } Then, afterwards, I append an 'END' to the files to make parsing them easier. FileWriter writer = new FileWriter("testfiles/GradifOut"+index, true); writer.write("END"); try{ writer.close(); }catch(IOException ioe){ ioe.printStackTrace(); } My redirection of stdin and stdout for gradif are from this answer. I tried using the try{close()}catch{} syntax to see if there was a problem with closing any of the files (there wasn't), and I got that from this answer. It should also be noted that the Individuals' fitness tests run concurrently. UPDATE: I've actually been able to narrow it down to the exec() call. In my most recent run, I first ran in to trouble at generation 733 (with a population size of 100). Why are the earlier generations fine? I don't understand why, if there's no leaking, the algorithm should be able to pass earlier generations but fail on later generations. And if there is leaking, then where is it coming from? UPDATE2: In trying to figure out what's going on here, I would like to be able to see (preferably in real-time) how many files the JVM has open at any given point. Is there an easy way to do that?
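
    For the UPDATE2 question, a minimal Linux-only sketch (it relies on the /proc filesystem, so it will not work on Windows or macOS) that the GA loop could call once per generation to watch the JVM's descriptor count:

        import java.io.File;

        public class FdMonitor {
            // Counts entries in /proc/self/fd, i.e. file descriptors currently held by this JVM.
            public static int openFileDescriptors() {
                String[] fds = new File("/proc/self/fd").list();
                return (fds == null) ? -1 : fds.length;
            }

            public static void main(String[] args) {
                System.out.println("Open file descriptors: " + openFileDescriptors());
            }
        }

    If the count climbs steadily across generations, the leak is real; if it only spikes while the concurrent fitness tests are in flight, the descriptors may simply be outliving their useful scope (for example, exec()'d gradif processes that are never reaped with waitFor()).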

    Read the article

  • Postgresql fails to start on Ubuntu 10.04.4 LTS

    - by cancerballs
    I installed postgresql 9.2 from add-apt-repository ppa:pitti/postgresql using apt-get install postgresql-9.2 At the end of the install, and every time I try to launch postgresql by using the following command /etc/init.d/postgresql start or service postgresql start I get this error: Error: could not exec /usr/lib/postgresql/9.2/bin/pg_ctl /usr/lib/postgresql/9.2/bin/pg_ctl start -D /var/lib/postgresql/9.2/main -l /var/log/postgresql/postgresql-9.2-main.log -s -o -c config_file="/etc/postgresql/9.2/main/postgresql.conf" : [fail] invoke-rc.d: initscript postgresql, action "start" failed. dpkg: error processing postgresql-9.2 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: postgresql-9.2 E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried everything found here: How to thoroughly purge and reinstall postgresql on ubuntu and here: Eliminating non working postgresql installations on ubuntu 10-04 and starting af. I have also done dpkg -P --force-remove-reinstreq postgresql-client-9.2 in my attempt to remove everything postgres-related from my server. After removing postgresql I have used dpkg --get-selections | grep postg to be sure there is nothing left and I can do a clean install. I have also made sure that the files and folders mentioned in the error message have the right permissions. The /var/log/postgresql/postgresql-9.2-main.log file is empty. I have tried installing every postgresql version from 8.3 to 9.2 and I get the same error every time. I once managed to compile postgresql from the source provided on their website, but then I encountered weird errors with psycopg2, so I figured I'd install postgresql this way and avoid those errors. Also, when I type apt-get install postgresql it by default tries to install the 8.3 version, even though I can find the package by typing apt-get install postgresql-9.2.
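
    Not an answer from the thread, just a debugging sketch: the init script swallows pg_ctl's output, so running the same command by hand (with the paths taken from the error message, and without -s or -l) usually prints the actual reason the cluster will not start:

        sudo -u postgres /usr/lib/postgresql/9.2/bin/pg_ctl start \
            -D /var/lib/postgresql/9.2/main \
            -o '-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
        # the cluster log is also worth re-checking afterwards
        sudo tail -n 50 /var/log/postgresql/postgresql-9.2-main.log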

    Read the article

  • Unable to install Eclipse manually

    - by veerendar
    I have just started using Linux. I have an SBC (Atom processor) on which I have installed Ubuntu 12.04, and now I am trying to install a Fortran IDE. I have learnt that I need to install OpenJDK first, then Eclipse Juno, and finally the Photran plugin for Eclipse. I have no Internet access, so I had to follow the steps below for manual installation. First download the eclipse tar.gz package (downloaded: eclipse-parallel-juno-linux-gtk.tar). Then right-click the eclipse tar.gz and choose the extract here option to extract the tar.gz package. You can also use the command line to extract the tar.gz package. # tar xzf eclipse-cpp-juno-linux-gtk.tar.gz Move it to the /opt/ folder. # mv eclipse /opt/ Use sudo if the above command gives a permission denied message. # sudo mv eclipse /opt/ Create a desktop file and place it into /usr/share/applications # sudo gedit /usr/share/applications/eclipse.desktop and copy the following to the eclipse.desktop file [Desktop Entry] Name=Eclipse Type=Application Exec=/opt/eclipse/eclipse Terminal=false Icon=/opt/eclipse/icon.xpm Comment=Integrated Development Environment NoDisplay=false Categories=Development;IDE Name[en]=eclipse.desktop Create a symlink in /usr/local/bin using # cd /usr/local/bin # sudo ln -s /opt/eclipse/eclipse Now it's time to launch eclipse. # /opt/eclipse/eclipse -clean & Now at step 5, when I type the command sudo ln -s /opt/eclipse/eclipse , I get this error message: ln: Failed to create symbolic link './eclipse': File exists. Please help me in resolving this.
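
    The message suggests /usr/local/bin already contains an entry named eclipse, perhaps from an earlier attempt. A small sketch for checking and replacing it (verify what the existing entry points to before overwriting it):

        ls -l /usr/local/bin/eclipse                              # see what the existing 'eclipse' entry is
        sudo ln -sf /opt/eclipse/eclipse /usr/local/bin/eclipse   # -f replaces the old link in place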

    Read the article

  • PetaPoco with stored procedures

    - by Jalpesh P. Vadgama
    In a previous post I wrote about how we can use PetaPoco with ASP.NET MVC. One of my dear friends, Kirti, asked how we can use it with a stored procedure, so I decided to write a small post about that. Let's first create a simple stored procedure for the Customer table that I used in my previous post. It is a single query that returns customers. Following is the code for that. CREATE PROCEDURE mysp_GetCustomers AS SELECT * FROM [dbo].Customer Now our stored procedure is ready, so I just need to change my CustomerDB file from the previous post's example as follows. using System.Collections.Generic; namespace CodeSimplified.Models { public class CustomerDB { public IEnumerable<Customer> GetCustomers() { var databaseContext = new PetaPoco.Database("MyConnectionString"); return databaseContext.Query<Customer>("exec mysp_GetCustomers"); } } } That's it. Now it's time to run this in the browser, and here is the output. In a future post I will explain how we can use PetaPoco with a parameterised stored procedure. Hope you liked it. Stay tuned for more. Happy programming.
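
    Until that follow-up post, here is a hedged sketch of what a parameterised call could look like; mysp_GetCustomerById and its single ID argument are hypothetical, and PetaPoco's positional @0, @1 placeholders carry the values:

        public Customer GetCustomer(int customerId)
        {
            var databaseContext = new PetaPoco.Database("MyConnectionString");
            // @0 is replaced by the first argument passed after the SQL string
            return databaseContext.SingleOrDefault<Customer>(
                "exec mysp_GetCustomerById @0", customerId);
        }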

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction This post shows how to use the Oracle Solaris 11 Automated Install server to automate the Solaris 11 Zones installation. In this document I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Set up the Automated Install server (AI) using the following instructions: “How to Set Up Automated Installation Services for Oracle Solaris 11”. The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1 # cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example: set zonepath=/rpool/zones/zone2 Step 2: Copy and share the Zones configuration files Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration files # share -o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as a client to the Install Service Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list -c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386). Step 4: Global Zone manifest setup First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a Global Zone configuration profile which includes the host information, for example: host name, IP address, name services, etc. # sysconfig create-profile -o /var/tmp/gz_profile.xml You need to provide the host information, for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this: # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile You can verify the profile creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profile has been created and associated with the s11x86service install service. Step 7: Set up the Solaris Zones configuration profiles This step is similar to the Global Zone profile creation in Step 6 # sysconfig create-profile -o /var/tmp/zone1_profile.xml # sysconfig create-profile -o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profile creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service install service:     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup Associate the Global Zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where: gzmanifest is the name of the manifest. gz_profile is the name of the configuration profile. mac="0:14:4f:2:a:19" is the client (global zone) MAC address. s11x86service is the install service name. # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command # installadm list -n s11x86service -p -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list the status of all the Solaris Zones using the following command # zoneadm list -civ Once the zones are in the running state you can log in to a zone using the following command # zlogin zone1 Troubleshooting Automated Installations If an installation to a client system failed, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.     Required datasets are not configured in the global zone. For more troubleshooting information, see “Installing Oracle Solaris 11 Systems”. Conclusion This paper demonstrated the benefits of using the Automated Install server to simplify the Non-Global Zones setup, including the creation and configuration of the Global Zone manifest and the Solaris Zones profiles.

    Read the article

  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I often receive is how to detect if disk space is running low on SQL Server. There are two different ways to do this. I personally prefer method 2 as it is very easy to use and I can use it creatively along with the database name. Method 1: EXEC MASTER..xp_fixeddrives GO The above query will return two columns: drive name and MB free. If we want to use this data in our query, we will have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it. Method 2: SELECT DISTINCT dovs.logical_volume_name AS LogicalName, dovs.volume_mount_point AS Drive, CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB FROM sys.master_files mf CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs ORDER BY FreeSpaceInMB ASC GO The above query will give us three columns: drive logical name, drive letter and free space in MB. We can further modify the above query to include the database name as well. SELECT DISTINCT DB_NAME(dovs.database_id) DBName, dovs.logical_volume_name AS LogicalName, dovs.volume_mount_point AS Drive, CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB FROM sys.master_files mf CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs ORDER BY FreeSpaceInMB ASC GO This will give us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the above query one more time to also include the details of the actual file location. SELECT DISTINCT DB_NAME(dovs.database_id) DBName, mf.physical_name PhysicalFileLocation, dovs.logical_volume_name AS LogicalName, dovs.volume_mount_point AS Drive, CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB FROM sys.master_files mf CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs ORDER BY FreeSpaceInMB ASC GO The above query will now additionally include the physical file location as well. As I mentioned earlier, I prefer method 2 as I can use it creatively as per the business need. Let me know which method you are using on your production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
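
    As a small addition to Method 1, here is one way its output can be captured in a temporary table, as the post suggests, so it can be filtered or joined (the 10240 MB threshold is only an example):

        CREATE TABLE #FixedDrives (Drive CHAR(1), MBFree INT);
        INSERT INTO #FixedDrives (Drive, MBFree)
        EXEC master..xp_fixeddrives;
        -- drives with less than 10 GB free
        SELECT Drive, MBFree FROM #FixedDrives WHERE MBFree < 10240;
        DROP TABLE #FixedDrives;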

    Read the article

  • flickr, other account types not appearing in online-accounts

    - by Fen
    Using Shotwell, I discovered that to publish to Flickr I need to set up an online account. But the online-accounts system settings only has support for Google, Facebook, Windows Live, Microsoft Exchange and Enterprise Login (Kerberos). How do I add account types? These appear to be properly installed (dpkg-reconfigure returns silently): gnome-control-center-signon is already the newest version. account-plugin-yahoo is already the newest version. account-plugin-flickr is already the newest version. Here's the config file (I think): > cat /usr/share/applications/gnome-online-accounts-panel.desktop [Desktop Entry] Name=Online Accounts Comment=Manage online accounts Exec=gnome-control-center online-accounts Icon=goa-panel Terminal=false Type=Application StartupNotify=true Categories=GNOME;GTK;Settings;DesktopSettings;X-GNOME-Settings-Panel;X-GNOME-PersonalSettings; OnlyShowIn=GNOME;XFCE X-GNOME-Bugzilla-Bugzilla=GNOME X-GNOME-Bugzilla-Product=gnome-control-center X-GNOME-Bugzilla-Component=Online Accounts X-GNOME-Bugzilla-Version=3.4.2 X-GNOME-Settings-Panel=online-accounts # Translators: those are keywords for the online-accounts control-center panel Keywords=Google;Facebook;Flickr;Twitter;Yahoo;Web;Online;Chat;Calendar;Mail;Contact; X-Ubuntu-Gettext-Domain=gnome-control-center-2.0 History: Started out with Ubuntu (64-bit), then in 12.04 installed xubuntu-desktop and have been using that. Upgraded to 12.10.

    Read the article

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes, an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a mail server is down, and trying to work out a way of running each one again can be painful. It’s almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn’t want to be re-run. Luckily, there’s a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of the subscription is stored. Having found the subscriptions that you’re interested in, finding the SQL Agent jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it’s possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them. select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + '''' from msdb.dbo.sysjobs j  join  msdb.dbo.sysjobsteps js on js.job_id = j.job_id join  [ReportServer].[dbo].[Subscriptions] s  on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%' where s.LastStatus like 'Failure sending mail%'; Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.
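
    A possible variation, sketched here rather than taken from the post: start the matching jobs directly with a cursor instead of copying the generated commands out by hand.

        DECLARE @job_name sysname;
        DECLARE failed_jobs CURSOR LOCAL FAST_FORWARD FOR
            SELECT j.name
            FROM msdb.dbo.sysjobs j
            JOIN msdb.dbo.sysjobsteps js ON js.job_id = j.job_id
            JOIN ReportServer.dbo.Subscriptions s
              ON js.command LIKE '%' + CAST(s.subscriptionid AS varchar(40)) + '%'
            WHERE s.LastStatus LIKE 'Failure sending mail%';
        OPEN failed_jobs;
        FETCH NEXT FROM failed_jobs INTO @job_name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC msdb.dbo.sp_start_job @job_name = @job_name;
            FETCH NEXT FROM failed_jobs INTO @job_name;
        END
        CLOSE failed_jobs;
        DEALLOCATE failed_jobs;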

    Read the article

  • Notify-osd notifications appear unthemed in top-left corner (ubuntu 13.10)

    - by Wehlutyk
    Problem I recently upgraded from 13.04 to 13.10, and suddenly notification bubbles don't appear themed as usual in the upper right corner, but they appear as white text on blue background in the upper-left corner. It looks like this: Unsuccesful attempts to fix it I tried reinstalling unity, notify-osd, ubuntu-desktop removed notification-daemon which was installed, none of that fixes it. In fact running ps aux | grep notify-osd shows that notify-osd isn't even running. But when I try to start it manually by running /usr/lib/x86_64-linux-gnu/notify-osd I get: ** (notify-osd:4618): WARNING **: Another instance has already registered org.freedesktop.Notifications ** (notify-osd:4618): WARNING **: Could not register instance If I understand well, the instance is registered by the /usr/share/dbus-1/services/org.freedesktop.Notifications.service file, which right now contains: [D-BUS Service] Name=org.freedesktop.Notifications Exec=/usr/lib/x86_64-linux-gnu/notify-osd Renaming or deleting that file (and rebooting) has no effect whatsoever (and it is not recreated automatically). This is not a duplicate of No notifications from notify-osd on 13.10 (and by the way I purged gnome-flashback-session along with notification-daemon) Question(s) How can I debug this? How can I get notifications to come back to normal? If additional debug information is needed, I'll be happy to add it (just that I can't find any more).

    Read the article

  • After restoring a SQL Server database from another server - get login fails

    - by Renso
    Issue: After you have restored a sql server database from another server, lets say from production to a Q/A environment, you get the "Login Fails" message for your service account. Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logons do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logons do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database. Solution: Links a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user and password if the login does not already exist, otherwise you must specify user but password will be ignored. login must be NULL. user must be a valid user in the current database. The login cannot have another user mapped to it. execute the following stored procedure, in this example the login user name is "MyUser" exec sp_change_users_login 'Auto_Fix', 'MyUser'   NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.
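
    Before running Auto_Fix it can help to list every orphaned user in the restored database first; the 'Report' action of the same procedure does that (the database name below is a placeholder):

        USE MyRestoredDB;  -- placeholder: the database you just restored
        EXEC sp_change_users_login 'Report';               -- lists users whose SIDs have no matching login
        EXEC sp_change_users_login 'Auto_Fix', 'MyUser';   -- then re-link (or create) the login for each one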

    Read the article

  • Eclipse disappears when minimized and will not come back (12.04 Unity)

    - by Kevin
    I have Ubuntu 12.04 x64 running Unity3d. I downloaded Eclipse from eclipse.org (not the software center) and created a desktop using gnome-desktop-item-edit. The resulting file is below, and I added it to the launcher on the left of the screen by dragging that file on. #!/usr/bin/env xdg-open [Desktop Entry] Version=1.0 Type=Application Terminal=false Icon[en_US]=/home/kevin/eclipse/icon.xpm Name[en_US]=Eclipse Exec=/home/kevin/eclipse/eclipse Name=Eclipse Icon=/home/kevin/eclipse/icon.xpm#!/usr/bin/env xdg-open However, when I minimize eclipse, eclipse disappears. There is no arrow to the left of the icon in the launcher like usual. And when I click on the launcher again, it tries to relaunch eclipse instead of bringing back the one that was minimized. Eclipse also does not show up when I alt-tab. I know it is still running because I can see it running with the system monitor. Note that Eclipse works properly until it is minimized. I have observed this behavior on two different computers now. Does anyone know how to fix it?
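
    One thing worth checking, offered only as a guess: Unity matches a running window to its launcher entry through the window's WM_CLASS, so a cleaned-up .desktop file (with the stray "#!/usr/bin/env xdg-open" removed from the Icon line) plus a StartupWMClass hint sometimes restores the minimize/restore behaviour. The value Eclipse below is an assumption; confirm it by running xprop WM_CLASS and clicking the Eclipse window:

        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Icon=/home/kevin/eclipse/icon.xpm
        Name=Eclipse
        Exec=/home/kevin/eclipse/eclipse
        StartupWMClass=Eclipse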

    Read the article

  • SQL SERVER – Get Directory Structure using Extended Stored Procedure xp_dirtree

    - by pinaldave
    Many years ago I wrote the article SQL SERVER – Get a List of Fixed Hard Drive and Free Space on Server, where I demonstrated using an undocumented stored procedure to find the drive letters on the local system and the available free space. I received a question in email from a reader asking if there is any way he can list a directory structure within T-SQL. When I inquired more, he explained that he needs this because he wants to set up a backup of the data in a certain structure. Well, there is one undocumented stored procedure which can do exactly that. However, please be wary of using any undocumented procedures. xp_dirtree 'C:\Windows' Execution of the above stored procedure will give the following result. If you prefer, you can insert the data into a temp table and use it from there. Here is the quick script which will insert the data into a temp table and retrieve it from the same. CREATE TABLE #TempTable (Subdirectory VARCHAR(512), Depth INT); INSERT INTO #TempTable (Subdirectory, Depth) EXEC xp_dirtree 'C:\Windows' SELECT Subdirectory, Depth FROM #TempTable; DROP TABLE #TempTable; Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology
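
    As an aside, and with the same caution about undocumented behaviour, xp_dirtree is commonly reported to accept two extra arguments: a depth limit and a flag that also lists files. A hedged sketch, worth verifying on your own build first:

        CREATE TABLE #DirTree (Subdirectory VARCHAR(512), Depth INT, IsFile BIT);
        INSERT INTO #DirTree (Subdirectory, Depth, IsFile)
        EXEC xp_dirtree 'C:\Windows', 1, 1;   -- depth = 1 level, final 1 = include files
        SELECT Subdirectory, Depth, IsFile FROM #DirTree WHERE IsFile = 1;
        DROP TABLE #DirTree;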

    Read the article

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click on that "new" button on top, whatever SQL I generate from right-click just overrides the current SQL in the window. I'm probably missing a "generate new sql in new window" setting. The default Teradata SQL Assistant doesn't execute just the SQL query I highlighted; there is a setting I have to change first. I'm not really happy that the SQL assistant and SQL admin are different apps. Still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows. LOVE the execution plan / explanation. I think that part is done better than MS SQL's in some ways. The error messages could be better. I feel that the Teradata .NET provider sends a smaller query command over than others do; I don't have any hard data to support my claim. One of my queries in SSRS was passing multi-valued parameters to another query, and got the error "Teradata 3577 row size or sort key size overflow". The search on this error says the solution is to cast the result column into a smaller data type, but I found that the problem was that the parameter passed into the where clause could not be too large. I wish Teradata SQL Assistant would remember the window size I just adjusted to. Every time I execute the query, the result set, query, and exec log auto re-adjust back to the default size. In SSMS, if I adjust the result set area to be smaller, it stays like that when I execute a query in the same window.

    Read the article

  • Unable to set my own icon for launcher item in 12.04

    - by Alex K
    I use the Faenza icon collection in Ubuntu 12.04 Unity with no issues. I decided to change my Gimp launcher icon, so I made my own (gimp-ak.png) and added it, and its appropriately sized derivatives, to the Faenza icon folders: /usr/share/icons/Faenza/apps/16/gimp-ak.png /usr/share/icons/Faenza/apps/22/gimp-ak.png /usr/share/icons/Faenza/apps/24/gimp-ak.png /usr/share/icons/Faenza/apps/32/gimp-ak.png /usr/share/icons/Faenza/apps/48/gimp-ak.png /usr/share/icons/Faenza/apps/64/gimp-ak.png /usr/share/icons/Faenza/apps/96/gimp-ak.png /usr/share/icons/Faenza/apps/scalable/gimp-ak.svg I then updated the Icon field in /usr/share/applications/gimp.desktop from "gimp" to "gimp-ak": [Desktop Entry] Version=1.0 Type=Application Name=GIMP Image Editor GenericName=Image Editor Comment=Create images and edit photographs Exec=gimp-2.6 %U TryExec=gimp-2.6 Icon=gimp-ak Terminal=false Categories=Graphics;2DGraphics;RasterGraphics;GTK; X-GNOME-Bugzilla-Bugzilla=GNOME X-GNOME-Bugzilla-Product=GIMP X-GNOME-Bugzilla-Component=General X-GNOME-Bugzilla-Version=2.6.12 X-GNOME-Bugzilla-OtherBinaries=gimp-2.6 StartupNotify=true MimeType=application/postscript;application/pdf;image/bmp;image/g3fax;image/gif;image/x-$ X-Ubuntu-Gettext-Domain=gimp20 After logging off (and even restarting), my custom icon does not show up - Gimp has the default gear icon: Setting the Icon field in gimp.desktop to any other icon in the Faenza collection works fine. What do I need to do to get my custom icon to show up properly?
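
    One possibility, offered as a guess rather than a confirmed fix: Faenza ships a compiled icon cache, and icons copied in by hand can stay invisible until that cache is rebuilt. Rebuilding it is harmless to try:

        sudo gtk-update-icon-cache -f /usr/share/icons/Faenza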

    Read the article

  • Oracle Linux / Symantec Partnership

    - by Ted Davis
    Fred Astaire and Ginger Rogers sang the now famous lyrics:  “You like to-may-toes and I like to-mah-toes”. In the tech world, is it Semantic or is it Symantec? Ah, well, we know it’s the latter. Actually, who doesn’t know or hasn’t heard of Symantec in the tech world? Symantec is thoroughly engrained in Enterprise customer infrastructure from their Storage Foundation Suite to their Anti-Virus products. It would be hard to find anyone who doesn’t use their software. Likewise, Oracle Linux is thoroughly engrained in Enterprise infrastructure – so our paths cross quite a bit. This is why the Oracle Linux  engineering team works with Symantec to make sure their applications and agents are supported on Oracle Linux. We also want to make sure the Oracle Linux / Symantec customer experience is trouble free so customer work continues at the same blistering pace. Here are a few Symantec applications that are supported on Oracle Linux: Storage Foundation Netbackup Enterprise Server Symantec Antivirus For Linux Veritas Cluster Server Backup Exec Agent for Linux So, while Fred and Ginger may disagree on how to spell tomato, for our software customers, the Oracle / Symantec partnership works together so our joint customers experience and hear the sweet song of success.

    Read the article

  • How to set Guake as preferred terminal emulator for "Run in Terminal"?

    - by d3vid
    I've recently installed Guake and like it a lot. I'd like to set it as my preferred terminal application. That is, when I right-click on a bash script file, click "Open", and choose "Run in Terminal", I want it to open in a new Guake tab. I'm not sure where to set Guake as the preferred app for "Run in Terminal". And I'm guessing that I might need the command to be something like guake --new-tab=new --execute-command="COMMANDHERE", so how do I pass that parameter? Ideally, I'd like a terminal invocation to open a new Guake tab, unless there is already one available. (Difficult to tell; what if there's already a command running in the existing tab?) Failing that, just opening a new Guake tab is OK. Also, is there an option to keep Guake hidden when this happens? Already tried: Based on How can I set default terminal used? I have already tried: gconftool --type string --set /desktop/gnome/applications/terminal/exec guake - this made Guake appear when I type Ctrl-Alt-T. setting x-terminal-emulator to \usr\bin\guake in Alternatives Configurator - this made no difference (having already made the previous change).

    Read the article

  • PCI-e SEALEVEL dual serial card not recognized on Ubuntu 14.04

    - by kwhunter
    New to Ubuntu, running 14.04. I have trouble with the serial port setup, I have a plug-in PCI-e SEALEVEL dual serial card that is not recognized. user1@WSIWORKSTATION2:~$ dmesg | grep tty [ 0.000000] console [tty0] enabled [ 1.577197] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 1.943326] tty ttyprintk: hash matches [ 17.240880] ttyS4: LSR safety check engaged! [ 17.241589] ttyS4: LSR safety check engaged! [ 17.242354] ttyS5: LSR safety check engaged! [ 17.243058] ttyS5: LSR safety check engaged! [ 17.243918] ttyS6: LSR safety check engaged! [ 17.244485] ttyS6: LSR safety check engaged! [ 17.245195] ttyS7: LSR safety check engaged! [ 17.245830] ttyS7: LSR safety check engaged! [ 17.246554] ttyS8: LSR safety check engaged! [ 17.247191] ttyS8: LSR safety check engaged! user1@WSIWORKSTATION2:~/Desktop/seacom$ lspci -d 135e: -vvv 02:04.0 Bridge: Sealevel Systems Inc Device e205 (rev aa) Subsystem: Sealevel Systems Inc Device e205 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium TAbort- SERR- Capabilities: However, the manufacturer's drivers won't install: 1. user1@WSIWORKSTATION2:~/Desktop/seacom$ make install ---------------------------------------------------------------- Installing seacom suite. ---------------------------------------------------------------- --Installing utilities-- /usr/bin/ld: skipping incompatible ../../lib/libftd2xx.a when searching for -lftd2xx /usr/bin/ld: cannot find -lftd2xx collect2: error: ld returned 1 exit status make[2]: *** [setusb] Error 1 make[1]: *** [install] Error 1 make: *** [install] Error 1 2. user1@WSIWORKSTATION2:~/Desktop/seaio$ make install ---------------------------------------------------------------- Installing SD suite. ---------------------------------------------------------------- *Compiling SeaIO Library source file: digitalIoDevice.cpp gcc: error trying to exec 'cc1plus': execvp: No such file or directory make[1]: *** [digitalIoDevice.o] Error 1 make: *** [install] Error 1 I was not able to find a solution, so if anyone can help it will be much appreciated. PS Have some troubles with formatting my post, apologies...
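
    The second failure ("gcc: error trying to exec 'cc1plus'") usually just means the C++ compiler front end is missing, so installing it before re-running make is a low-risk first step; the first failure ("skipping incompatible ... libftd2xx.a") looks like a 32-bit/64-bit library mismatch and would need the matching libftd2xx build from the vendor:

        sudo apt-get update
        sudo apt-get install build-essential g++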

    Read the article

  • Metacity malfunction preventing custom Gnome session from launching?

    - by QuietThud
    When I try to run Metacity in Ubuntu2D(12.04), I get the following message: alisa@ubuntu:~$ metacity Window manager warning: Screen 0 on display ":2.0" already has a window manager; try using the --replace option to replace the current window manager. I get the same message when running Compiz from the command line in 3D (it opens fine through the GUI (same thing for AWN)). I understand that these should be the default managers for the respective sessions. I'm trying to create a custom Gnome session using the following instructions: unity launcher-free session. Here is what I've put into my .session file: [GNOME Session] Name=Custom Unity2D Session RequiredComponents=gnome-settings-daemon; RequiredProviders=windowmanager;panel; DefaultProvider-windowmanager=metacity DefaultProvider-panel=unity-2d-panel FallbackSession=ubuntu-2d DesktopName=GNOME Since I'm having problems identifying my default, and the code refers to Metacity, I figured this may be relevant to my inability to load the custom session (it shows up on my login screen, but won't launch). I tried specifying Metacity as my default manager by adding exec metacity to the .xinitrc file, and I tried running metacity --replace, but neither worked. How do I determine my current default window manager, what should the default be, and how do I re-assign it? Also, please let me know if you think there may be other issues affecting my custom session. I am new to Linux, so list anything you think might be helpful. Thank you!
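
    For the "how do I determine my current default window manager" part, a small sketch (wmctrl comes from the wmctrl package and may need to be installed first):

        wmctrl -m                 # prints the name of the window manager currently managing the display
        echo $DESKTOP_SESSION     # shows which session was chosen at login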

    Read the article

  • mediatomb fails with "respawning too fast, stopped"

    - by felix
    When I try to start mediatomb it fails. I see this in dmesg [...] [916349.374331] init: mediatomb main process ended, respawning [916349.394462] init: mediatomb main process (880) terminated with status 1 [916349.394512] init: mediatomb main process ended, respawning [916349.414598] init: mediatomb main process (882) terminated with status 1 [916349.414647] init: mediatomb respawning too fast, stopped My current /etc/init/mediatomb.conf looks like this. description "MediaTomb UPnP media server" author "Daniel van Vugt <vanvugt in launchpad>" start on (local-filesystems and net-device-up IFACE!=lo) stop on runlevel [!2345] respawn env CONFIGXML=/etc/mediatomb/config.xml env LOGFILE=/var/log/mediatomb.log env DEFAULT=/etc/default/mediatomb script [ -r $DEFAULT ] && . $DEFAULT [ ! $USER ] && USER=root [ ! $GROUP ] && GROUP=$USER if [ -n "$INTERFACE" ]; then INTERFACE_ARG="-e $INTERFACE" $ROUTE_ADD $INTERFACE fi exec mediatomb \ -c $CONFIGXML \ -u $USER \ -g $GROUP \ -l $LOGFILE \ $INTERFACE_ARG \ $OPTIONS end script post-stop script [ -r $DEFAULT ] && . $DEFAULT if [ -n "$INTERFACE" ]; then $ROUTE_DEL $INTERFACE fi end script
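
    A debugging sketch rather than a fix: running the same command the job runs, but in the foreground, usually shows why the process exits with status 1 before upstart gives up respawning it (the arguments simply mirror the exec line in the job, with its default root user and group):

        sudo mediatomb -c /etc/mediatomb/config.xml -u root -g root -l /var/log/mediatomb.log
        # the log it is already writing to is also worth checking
        sudo tail -n 50 /var/log/mediatomb.log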

    Read the article

  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Event terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view of some of the elements in Extended Events. Before we start, let’s review the first Extended Events objects that we are going to use: -          Events: The SQL Server code is populated with event calls that, by default, are disabled. Adding events to a session enables those event calls. Once enabled, they will execute the set of functionality defined by the session. -          Target: This is an Extended Event object that can be used to log event information. Also it is important to understand the following Extended Event concept: -          Session: A server object created by the user that defines functionality to be executed every time a set of events happens.   It’s time to write a small “Hello World” using Extended Events. This will help understand the above terms. We will use: -          Event sqlserver.error_reported: This event gets fired every time that an error happens in the server. -          Target package0.asynchronous_file_target: This target stores the event data on disk. -          Session: We will create a session that sends all the error_reported events to the file target. Before we get started, a quick note: Don’t run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers. -- TRIES TO ELIMINATE PREVIOUS SESSIONS BEGIN TRY       DROP EVENT SESSION test_session ON SERVER END TRY BEGIN CATCH END CATCH GO   -- CREATES THE SESSION CREATE EVENT SESSION test_session ON SERVER ADD EVENT sqlserver.error_reported ADD TARGET package0.asynchronous_file_target -- CONFIGURES THE FILE TARGET (set filename = 'c:\temp\data1.xel' , metadatafile = 'c:\temp\data1.xem') GO   -- STARTS THE SESSION ALTER EVENT SESSION test_session ON SERVER STATE = START GO   -- GENERATES AN ERROR RAISERROR (N'HELLO WORLD', -- Message text.            1, -- Severity,            1, 7, 3, N'abcde'); -- Other parameters GO   -- STOPS LISTENING FOR THE EVENT ALTER EVENT SESSION test_session ON SERVER STATE = STOP GO   -- REMOVES THE EVENT SESSION FROM THE SERVER DROP EVENT SESSION test_session ON SERVER GO -- READS THE EVENT DATA FROM THE FILE TARGET select CAST(event_data as XML) as event_data from sys.fn_xe_file_target_read_file ('c:\temp\data1*.xel','c:\temp\data1*.xem', null, null) This query will output the event data with our first hello world in the Extended Event format: <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event> More on parsing event data in this post: Reading event data 101 Now let's move on to the other three Extended Event objects: -          Actions: These Extended Event objects get executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to capture additional data (like the TSQL statement related to an event, the session, the user) or to generate a mini dump.   -          Predicates: Predicates are logical expressions that specify which events to fire (e.g.
only listen to errors with a severity greater than 16). This are composed of two Extended Objects: o   Predicate comparators: Defines an operator for a pair of values. Examples: §  Severity > 16 §  error_message = ‘Hello World!!’ o   Predicate sources: These are values that can be also used by the predicates. They are generic data that isn’t usually provided in the event (similar to the actions). §  Sqlserver.username = ‘Tintin’ As logical expressions they can be combined using logical operators (and, or, not).  Note: This pair always has to be first an event field or predicate source and then a value         Let’s do another small Example. We will trigger errors but we will use the ones that have severity >= 10 and the error message != ‘filter’. To verify this we will use the action sql_text that will attach the sql statement to the event data: -- TRIES TO ELIMINATE PREVIOUS SESSIONS BEGIN TRY       DROP EVENT SESSION test_session ON SERVER END TRY BEGIN CATCH END CATCH GO   -- CREATES THE SESSION CREATE EVENT SESSION test_session ON SERVER ADD EVENT sqlserver.error_reported       (ACTION (sqlserver.sql_text) WHERE severity = 2 and (not (message = 'filter'))) ADD TARGET package0.asynchronous_file_target -- CONFIGURES THE FILE TARGET (set filename = 'c:\temp\data2.xel' , metadatafile = 'c:\temp\data2.xem') GO   -- STARTS THE SESSION ALTER EVENT SESSION test_session ON SERVER STATE = START GO   -- THIS EVENT WILL BE FILTERED BECAUSE SEVERITY != 2 RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde'); GO -- THIS EVENT WILL BE FILTERED BECAUSE MESSAGE = 'FILTER' RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde'); GO -- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde'); GO   -- STOPS LISTENING FOR THE EVENT ALTER EVENT SESSION test_session ON SERVER STATE = STOP GO   -- REMOVES THE EVENT SESSION FROM THE SERVER DROP EVENT SESSION test_session ON SERVER GO -- REMOVES THE EVENT SESSION FROM THE SERVER select CAST(event_data as XML) as event_data from sys.fn_xe_file_target_read_file ('c:\temp\data2*.xel','c:\temp\data2*.xem', null, null)   This last statement will output one event with the following data: <event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">   <data name="error">     <value>50000</value>     <text />   </data>   <data name="severity">     <value>2</value>     <text />   </data>   <data name="state">     <value>1</value>     <text />   </data>   <data name="user_defined">     <value>true</value>     <text />   </data>   <data name="message">     <value>PUBLISH</value>     <text />   </data>   <action name="sql_text" package="sqlserver">     <value>-- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde'); </value>     <text />   </action> </event> If you see more events, check if you have deleted previous event files. If so, please run   -- Deletes previous event files EXEC SP_CONFIGURE GO EXEC SP_CONFIGURE 'xp_cmdshell', 1 GO RECONFIGURE GO XP_CMDSHELL 'del c:\temp\data*.xe*' GO   or delete them manually.   More Info on Events: Extended Event Events More Info on Targets: Extended Event Targets More Info on Sessions: Extended Event Sessions More Info on Actions: Extended Event Actions More Info on Predicates: Extended Event Predicates Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • What is an elegant way to install non-repository software in 12.04?

    - by Tomas
    Perhaps I missed something when Canonical removed the "Create launcher" option from the right click menu, because I've really been missing that little guy. For me, it was the preferred way to install software that comes not in a .deb, but in a tar.gz, for example. (Note: in that tar.gz I have a folder with the compiled files, I'm NOT compiling from source) I just downloaded the new Eclipse IDE and extracted the tar.gz to my /usr folder. Now, I'd like to add it to my desktop and dash so it can be started easily. Intuitively I would right click the desktop and create a launcher. After this I'd copy the .desktop to /usr/share/applications. However, creating a launcher is not possible. My question: How would you install an already compiled tar.gz that you have downloaded from the internet? Below are a few things I've seen, but these are all more time-consuming than the right click option. If you have any better ideas, please let me know. Thanks! Manual copy & create a .desktop file: manually Simply extract the archive to /usr. Create a new text file, adding something along the lines of the code block below: [Desktop Entry] Version=1.0 Type=Application Terminal=false Exec="/usr/local/eclipse42/eclipse" Name="Eclipse 4.2" Icon=/home/tomas/icons/eclipse.svg Rename this file to eclipse42.desktop and make it executable. Then copy this to /usr/share/applications. Manually copy & create a .desktop file: GUI fossfreedom has elaborated on this in How can I create launchers on my desktop? Basically it involves the command: gnome-desktop-item-edit --create-new ~/Desktop After creating the launcher, copy it to /usr/share/applications.

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-29

    - by Bob Rhubart
    A surefire recipe for cloud failure | @DavidLinthicum www.infoworld.com "Foundational planning for the use of cloud computing is an architectural problem," says David Linthicum. "You need to consider the enterprise holistically, starting with the applications, data, services, and storage. Understand where it is and what it does." Validating an Oracle IDM Environment (including a Fusion Apps build out) | Brian Eidelman fusionsecurity.blogspot.com Brian Eidelman shows how to "validate an Oracle Identity Management build out containing OID, OVD, OIM, and OAM." Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat www.oracle.com Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Learn how your enterprise cloud can achieve 10x improved performance and 12x operational agility. Includes demo session. Speakers: Steve Wilson (VP Systems Management, Oracle) John Fowler (Exec VP Systems, Oracle) Brad Cameron (VP Development, Oracle Fusion Middleware) Bill Nesheim (VP Oracle Solaris) Dennis Reno (VP Customer Portal Experience, Oracle) Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center) Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center) 2012 Real World Performance Tour Dates |Performance Tuning | Performance Engineering www.ioug.org Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8 Los Angeles, CA - April 30 Orange County, CA - May 1 Redwood Shores, CA - May 3 Thought for the Day "At first sight, the idea of any rules or principles being superimposed on the creative mind seems more likely to hinder than to help, but this is quite untrue in practice. Disciplined thinking focuses inspiration rather than blinkers it." — G. L. Glegg

    Read the article

  • how to troubleshoot sql server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert / update / delete from any table, but I have one page that uses transactions to insert into different tables. The C# code is correct and very simple, but it fails. I used the SQL Profiler to see how my app interacts with the DB, especially since the app is using stored procedures. I can catch the exec procedure statement and run it manually from SSMS and it works fine, but the same stored procedure fails from the application!!! This leads me to think the issue is coming from the user account and settings. I am no expert in SQL Server and am wondering if anyone can explain how to verify the required settings for the user account. thanks EDIT: in web.config here is how I reference my connection <connectionStrings> <add name="Conn" connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3" providerName="System.Data.SqlClient" /> </connectionStrings> EDIT: I will try to describe the process here: 1- I begin a transaction 2- I call a stored proc to insert (which succeeds) and return the scope identity (that will be used in the next step) 3- I call another stored procedure to insert some more info + the scope identity from step 2, which is a foreign key here 4- I get an error, foreign key violation 5- the transaction is rolled back, now the tables are empty again... thanks
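
    One quick way to compare what the application's database user is actually allowed to do with what your own SSMS login can do is to impersonate it and list its effective permissions; a sketch (it assumes the DbUser login from the connection string maps to a database user of the same name):

        USE myDB;
        EXECUTE AS USER = 'DbUser';   -- impersonate the application's database user
        SELECT permission_name
        FROM fn_my_permissions(NULL, 'DATABASE')
        ORDER BY permission_name;
        REVERT;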

    Read the article

  • cannot get upstart to run user job

    - by dre
    I am trying to get upstart to start a user job during the boot of my machine. I have my conky.conf upstart config file in my $HOME/.init directory. When I run "start conky" I get this error: dre@dre-laptop:~$ start conky start: Rejected send message, 1 matched rules; type="method_call", sender=":1.76" (uid=1000 pid=2843 comm="start conky ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init") dre@dre-laptop:~$ I (think I) know that this error has to do with the D-Bus system (authentication). I also read (http://upstart.ubuntu.com/cookbook/#id96) that Ubuntu 12.10 already has the right configuration in the D-Bus config file "/etc/dbus-1/system.d/Upstart.conf" to allow normal users to use upstart. dre@dre-laptop:~/.init$ cat conky.conf description "conky, a system monitor applet" start on lightdm stop on shutdown # Automatically restart process if crashed respawn # Essentially lets upstart know the process will detach itself to the background #expect fork # Start conky exec /usr/bin/conky So, who knows, what am I doing wrong??? greetings Andre No one??? Please... give me your best shot.

    Read the article
