Search Results

Search found 29502 results on 1181 pages for 'line segment'.

Page 708/1181 | < Previous Page | 704 705 706 707 708 709 710 711 712 713 714 715  | Next Page >

  • unexpected EOF and end of document

    - by WASasquatch
    I have been fiddling with this code for a few days. Mind you I am a beginner. I just want to get my script to be able to download a remote file, and scan MineCraft plugins. I got the scan plugins to work, but I'm having two other issues. One, I can't get the mc_addplugin to work correctly, and I get a Unexpected EOF and unexpected end of document when running any other command besides mc_scanplugins or mc_start bash: -c: line 0: unexpected EOF while looking for matching `"' bash: -c: line 1: syntax error: unexpected end of file Help would be so much appreciated! Thanks in advance. #!/bin/bash # /etc/init.d/craftbukkit # version 0.9.1 2012-07-06 (YYYY-MM-DD) ### BEGIN INIT INFO # Provides: craftbukkit # Required-Start: $local_fs $remote_fs # Required-Stop: $local_fs $remote_fs # Should-Start: $network # Should-Stop: $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts craftbukkit server # Description: Starts and controls the craftbukkit server ### END INIT INFO # SETTINGS SERVICE='craftbukkit-1.2.5-R1.0.jar' OPTIONS='nogui' USERNAME='smith' # LIST ALL THE WORLDS IN YOUR CRAFTBUKKIT SERVER FOLDER WORLDS[1]='world' WORLDS[2]='world_nether' WORLDS[3]='world_the_end' WORLDS[4]='flat_world' MCPATH='/var/www/servers/Foundation' PLUGINSPATH='/var/www/servers/Foundation/plugins' TEMPPLUGINS='/var/www/servers/Foundationplugins/temp_plugins' BACKUPPATH='/var/www/servers/Foundation/backup' CPU_COUNT=2 INVOCATION="java -Xmx2024M -Xms2024M -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=$CPU_COUNT -XX:+AggressiveOpts -jar $SERVICE $OPTIONS" ME=`whoami` as_user() { if [ $ME == $USERNAME ] ; then bash -c "$1" else su - $USERNAME -c "$1" fi } mc_start() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is already running!" else echo "Starting $SERVICE..." cd $MCPATH as_user "cd $MCPATH && screen -dmS craftbukkit $INVOCATION" sleep 7 if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is now running." else echo "Error! Could not start $SERVICE!" fi fi } mc_saveoff() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... suspending saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say The server is preforming a backup. Server going to read-only mode. Do not build...\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-off\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 else echo "$SERVICE is not running. Not suspending saves." fi } mc_save() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... Saving worlds..." as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 echo "Save complete!" else echo "$SERVICE is not running. Cannot save!" fi } mc_saveon() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... re-enabling saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-on\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say Server backup has completed. Server going to read-write mode. You can now continue building...\"\015'" else echo "$SERVICE is not running. Not resuming saves." fi } mc_stop() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Stopping $SERVICE" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say $SERVERNAME is shutting down in 30 seconds! Please stop what you are doing. 
Check back later, we'll be back!\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sleep 30 as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"stop\"\015'" sleep 7 else echo "$SERVICE was not running." fi if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Error! $SERVICE could not be stopped." else echo "$SERVICE is stopped." fi } mc_update() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Will not start update." else MC_SERVER_URL=http://dl.bukkit.org/latest-rb/craftbukkit.jar as_user "cd $MCPATH && wget -q -O $MCPATH/craftbukkit_server.jar.update $MC_SERVER_URL" if [ -f $MCPATH/craftbukkit_server.jar.update ] then if `diff $MCPATH/$SERVICE $MCPATH/craftbukkit_server.jar.update >/dev/null` then echo "You are already running the latest version of $SERVICE. Update anyway? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!"; break;; No ) echo "The update was not installed! Removing temporary files and exiting..." as_user "rm $MCPATH/craftbukkit_server.jar.update" exit;; esac done else as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!" fi else echo "$SERVICE update could not be downloaded." fi fi } mc_addplugin() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Please stop the service before adding a plugin." else echo "Paste the URL to the .JAR Plugin..." read JARURL JARNAME=$(basename "$JARURL") if [ -d "$TEMPPLUGINS" ] then as_user "cd $PLUGINSPATH && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" else as_user "cd $PLUGINSPATH && mkdir $TEMPPLUGINS && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" fi if [ -f "$TMPDIR/$JARNAME" ] then if [ -f "$PLUGINSPATH/$JARNAME" ] then if `diff $PLUGINSPATH/$JARNAME $TMPDIR/$JARNAME >/dev/null` then echo "You are already running the latest version of $JARNAME." else NOW=`date "+%Y-%m-%d_%Hh%M"` echo "Are you sure you want to overwrite this plugin? [Y/n]" echo "Note: Your old plugin will be moved to the "$TEMPPLUGINS" folder with todays date." select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Are you sure you want to add this new plugin? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]?" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Failed to download the plugin from the URL you specified!" 
exit; fi fi } mc_scanplugins() { if [ "$(ls -A $PLUGINSPATH)" ] then shopt -s nullglob PLUGINS=($PLUGINSPATH/*.jar) i=1 for f in "${PLUGINS[@]}" do echo "${i}: $f" PLUGIN[$i]=$f i=$(( $i + 1 )) done echo "Enter the ID of a plugin you want removed, or any other key to cancel." read INPUT if [ ! -z "${INPUT##*[!0-9]*}" ] then if [ -f "${PLUGIN[INPUT]}" ] then echo "Removing plugin..." JAR=$(basename ${PLUGIN[INPUT]}) JARNAME=${JAR%.jar} as_user "rm -f ${PLUGIN[INPUT]}" sleep 2 as_user "cd $PLUGINSPATH && rm -rf ./${JARNAME}" if [ -f "${PLUGINSPATH}/${JARNAME}" ] then echo "Plugin folder could not be removed..." fi echo "Plugin removed." else echo "${PLUGIN[INPUT]}" echo "Invalid plugin! Does not exist! Canceling..." exit; fi else echo "Canceling..." exit; fi else echo "You have no plugins installed." exit; fi } mc_backup() { mc_saveoff for i in "${WORLDS[@]}"; do NOW=`date "+%Y-%m-%d_%Hh%M"` BACKUP_FILE="$BACKUPPATH/${i}_${NOW}.tar" echo "Backing up world: $i..." #as_user "cd $MCPATH && cp -r $i $BACKUPPATH/${i}_`date "+%Y.%m.%d_%H.%M""` as_user "tar -C \"$MCPATH\" -cf \"$BACKUP_FILE\" $i" done echo "Backing up $SERVICE" as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" $SERVICE" #as_user "cp \"$MCPATH/$SERVICE\" \"$BACKUPPATH/craftbukkit_server_${NOW}.jar\"" mc_saveon echo "Compressing backup..." as_user "tar -cvzf $BACKUPPATH/server_backup_${NOW}.tar.gz $MCPATH" echo "Backup has completed successfully." } mc_command() { command="$1"; if pgrep -u $USERNAME -f $SERVICE > /dev/null then pre_log_len=`wc -l "$MCPATH/server.log" | awk '{print $1}'` echo "$SERVICE is running... executing command" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"$command\"\015'" sleep .1 # assumes that the command will run and print to the log file in less than .1 seconds # print output tail -n $[`wc -l "$MCPATH/server.log" | awk '{print $1}'`-$pre_log_len] "$MCPATH/server.log" fi } #Start-Stop here case "$1" in start) mc_start ;; stop) mc_stop ;; restart) mc_stop mc_start ;; save) mc_save ;; update) mc_stop mc_backup mc_update mc_start ;; scanplugins) mc_scanplugins ;; addplugin) mc_addplugin ;; backup) mc_backup ;; status) if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running." else echo "$SERVICE is not running." fi ;; command) if [ $# -gt 1 ]; then shift mc_command "$*" else echo "Must specify server command (try 'help'?)" fi ;; *) echo "Usage: $0 {start|stop|update|backup|status|restart|command \"server command\"}" exit 1 ;; esac exit 0
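    The "unexpected EOF while looking for matching quote" message comes from bash -c inside as_user, which means one of the command strings being passed to it has unbalanced quoting. Two likely culprits in a script of this shape (stated here as a hunch, not a verified diagnosis) are the apostrophe in "we'll" inside the single-quoted screen command in mc_stop, and that same say message being split across two lines inside one quoted string. A minimal hedged sketch of the failure mode and the fix, separate from the full script:
        #!/bin/bash
        # simplified stand-in for the script's as_user helper
        as_user() {
            bash -c "$1"
        }

        # Broken: the apostrophe in "we'll" ends the single-quoted section early,
        # so bash -c is left hunting for a matching quote:
        #   as_user "echo 'The server is shutting down. we'll be back!'"
        #   -> produces the same "unexpected EOF while looking for matching ..." failure

        # Fixed: drop or escape the apostrophe, and keep quoted messages on a single line
        as_user "echo 'The server is shutting down. We will be back!'"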

    Read the article

  • "VLC could not read the file" error when trying to play DVDs

    - by stephenmurdoch
    I can watch most DVDs on my machine using VLC, but today I went to watch Thor and it won't play.
        - libdvdread4 and libdvdcss2 are at the latest versions
        - vlc -v returns 1.1.4
        - w32codecs are installed and reinstalled
        - ubuntu-restricted-extras are same as above
    My machine recognises the disc and I can open the folder and browse the assorted .vob files, of which there are many. None of them will open in VLC, or in MPlayer etc. When I run vlc -vvv /media/THOR/VIDEO_TS/VTS_03_1.VOB I get "File Reading Failed - VLC could not read the file". I also see command-line output like this:
        [0x963f47c] main filter debug: removing module "swscale"
        [0x963a4b4] main generic debug: A filter to adapt decoder to display is needed
        [0x964be84] main filter debug: looking for video filter2 module: 18 candidates
        [0x964be84] swscale filter debug: 720x576 chroma: I420 -> 979x551 chroma: RV32 with scaling using Bicubic (good quality)
        [0x964be84] main filter debug: using video filter2 module "swscale"
        .....
        [0x959f4e4] main video output warning: late picture skipped (-10038 > -15327)
        [0x963a4b4] main generic debug: auto hidding mouse
        [0x93ca094] main input warning: clock gap, unexpected stream discontinuity
        [0x93ca094] main input warning: feeding synchro with a new reference point trying to recover from clock gap
        [0x959f4e4] main video output warning: early picture skipped
        ......
        ac-tex damaged at 0 12
        ac-tex damaged at 6 20
        ac-tex damaged at 12 28
    This happens with the onboard drive and with a known-good USB DVD player; I don't have a standalone DVD player to try with the TV. I am going to watch another film instead for now, because I can do that. I just can't watch Thor, and I'm pretty confident that the disc is OK. It is a rental, but it's clean and there are no surface abrasions. I even cleaned it with Christian Dior aftershave to make sure.

    Read the article

  • Configure Dual Boot, Windows 7 and Ubuntu 12.04 with or without EFI

    - by Keroak
    I have just installed Ubuntu 12.04 on a laptop with Windows 7, but I cannot get Ubuntu to boot. During the installation I made these partitions (maybe too many):
        /dev/sda1  FAT32 SYSTEM, 200 MB, boot (the EFI boot partition, I guess)
        /dev/sda2  unknown file system, 128 MB, msftres (Windows Boot Manager)
        /dev/sda3  NTFS OS, 100 GB (Windows 7)
        /dev/sda4  NTFS DATOS, 315 GB (data partition)
        /dev/sda5  ext4, 28 GB (/home)
        /dev/sda8  unknown file system, 1 GB, bios_grub (I'm not very sure why I made this one)
        /dev/sda6  ext4, 17 GB (/ ; Ubuntu 12.04 installed without errors, apparently)
        /dev/sda7  linux-swap, 2 GB (swap)
    I can boot from Windows perfectly. I tried to configure the Windows Boot Manager with EasyBCD, but it doesn't recognize any boot entry. Anyway, I added an Ubuntu entry and EasyBCD configured it automatically. Now I have two boot entries: the Windows 7 one, which appears to work, and the Ubuntu 12.04 one, which prompts a "No application found" message. I restarted from a USB with Ubuntu and tried to fix GRUB from the command line and with boot-repair. No results. As far as I understand, I have to tell the Windows Boot Manager where my Ubuntu boot loader is. So I have two problems: I don't know where my Ubuntu boot loader (GRUB or GRUB2 or whatever) actually is, and I don't know how to set my Ubuntu entry in the Windows Boot Manager. I guess I should use BCDedit.exe, since EasyBCD didn't show me the entries; anyway, I don't know what parameters to use. I read several articles about it but didn't find anything useful.

    Read the article

  • Metacity malfunction preventing custom Gnome session from launching?

    - by QuietThud
    When I try to run Metacity in Ubuntu 2D (12.04), I get the following message:
        alisa@ubuntu:~$ metacity
        Window manager warning: Screen 0 on display ":2.0" already has a window manager; try using the --replace option to replace the current window manager.
    I get the same message when running Compiz from the command line in 3D (it opens fine through the GUI; same thing for AWN). I understand that these should be the default managers for the respective sessions. I'm trying to create a custom Gnome session using the following instructions: unity launcher-free session. Here is what I've put into my .session file:
        [GNOME Session]
        Name=Custom Unity2D Session
        RequiredComponents=gnome-settings-daemon;
        RequiredProviders=windowmanager;panel;
        DefaultProvider-windowmanager=metacity
        DefaultProvider-panel=unity-2d-panel
        FallbackSession=ubuntu-2d
        DesktopName=GNOME
    Since I'm having problems identifying my default, and the code refers to Metacity, I figured this may be relevant to my inability to load the custom session (it shows up on my login screen, but won't launch). I tried specifying Metacity as my default manager by adding exec metacity to the .xinitrc file, and I tried running metacity --replace, but neither worked. How do I determine my current default window manager, what should the default be, and how do I re-assign it? Also, please let me know if you think there may be other issues affecting my custom session. I am new to Linux, so list anything you think might be helpful. Thank you!
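    For the "which window manager is currently running" part of the question, a small hedged sketch using standard tools (wmctrl is assumed to be installed or installable from the repositories):
        # print the name of the window manager currently managing the display
        wmctrl -m

        # or look for the usual suspects among running processes
        ps -e | grep -E 'metacity|compiz|mutter'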

    Read the article

  • Programming by dictation?

    - by Andrew M
    i.e. you speak out the code, and someone else across the room types it in. Anyone tried this? Obviously the person taking the dictation would need to be a coder too, so you didn't have to explain everything and go into tedious detail (not 'open bracket, new line...' but more like 'create a new class called myParser that takes three arguments, first one is...'). I thought of it because sometimes I'm too easily distracted at my computer. Surrounded by buttons, instant gratification a click away, the world at my fingertips. To get stuff done, I want to get away, write my code on paper. But that would mean losing access to necessary resources, and necessitate tedious typing-up later on. The solution? Dictate.
    Pros:
        - no chance to check reddit, stackexchange, gmail, etc.
        - code while you pace the room, lie down, play billiards, whatever
        - train your brain to think more abstractly (you have to visualize things if you can't just see the screen)
        - skip the tedious details (closing brackets etc.)
        - the typist gets to shadow a more experienced programmer and learn how they work
        - the typist can provide assistance/suggestions
        - external pressure of the typist expecting instructions, urging you to stay focussed
    Cons:
        - might be too hard
        - might not work any better
        - rather inefficient use of the assisting programmer
        - need to find/pay someone to do this

    Read the article

  • How do I properly use String literals for loading content?

    - by Dave Voyles
    I've been using verbatim string literals for some time now, and never quite thought about using a regular string literal until I started to come across them in Microsoft-provided XNA samples. With that said, I'm trying to implement a new AudioManager class from the Net Rumble sample. I have two (2) issues here:
    Question 1: In the code for my GameplayScreen screen I have a folder location written as the following, and it works fine:
        menuButton = content.Load<SoundEffect>(@"sfx/menuButton");
        menuClose = content.Load<SoundEffect>(@"sfx/menuClose");
    If you notice, you'll see that I'm using a verbatim string, with a forward slash "/". In the AudioManager class, they use a regular string literal, but with two backslashes "\\". I understand that one is used as an escape, but why are they BACK instead of FORWARD? (See below)
        soundList[soundName] = game.Content.Load<SoundEffect>("audio\\wav\\" + soundName);
    Question 2: I seem to be doing everything correctly in my AudioManager class, but I'm not sure of what this line means:
        audioFileList = audioFolder.GetFiles("*.xnb");
    I suppose that the *.xnb means look for everything BUT files that end in .xnb? I can't figure out what I'm doing wrong with my file locations, as the sound effects are not playing. My code is not much different from what I've linked to above.
        private AudioManager(Game game, DirectoryInfo audioDirectory)
            : base(game)
        {
            try
            {
                audioFolder = audioDirectory;
                audioFileList = audioFolder.GetFiles("*.mp3");
                soundList = new Dictionary<string, SoundEffect>();
                for (int i = 0; i < audioFileList.Length; i++)
                {
                    string soundName = Path.GetFileNameWithoutExtension(audioFileList[i].Name);
                    soundList[soundName] = game.Content.Load<SoundEffect>(@"sfx\" + soundName);
                    soundList[soundName].Name = soundName;
                }
                // Plays this track while the GameplayScreen is active
                soundtrack = game.Content.Load<Song>("boomer");
            }
            catch (NoAudioHardwareException)
            {
                // silently fall back to silence
            }
        }
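    A short illustrative sketch (plain .NET console code, not from the Net Rumble sample) of the two points above: a regular literal with "\\" and a verbatim literal with "\" produce exactly the same path, Content.Load also accepts forward slashes, and GetFiles("*.xnb") is an include filter - it returns only the files that DO end in .xnb:
        using System;
        using System.IO;

        class PathDemo
        {
            static void Main()
            {
                string escaped  = "audio\\wav\\boom";   // regular literal: \\ is an escaped backslash
                string verbatim = @"audio\wav\boom";    // verbatim literal: \ is taken literally
                Console.WriteLine(escaped == verbatim); // True - both hold audio\wav\boom

                // list only the .xnb files in the current folder (an include pattern, not an exclude)
                var folder = new DirectoryInfo(Environment.CurrentDirectory);
                foreach (FileInfo f in folder.GetFiles("*.xnb"))
                    Console.WriteLine(f.Name);
            }
        }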

    Read the article

  • xorg, nvidia, log-in all hosed - how can I completely reset graphics set-up/settings?

    - by Fred Hamilton
    I just did a fresh install of Mythbuntu 12.04.1 on my Intel MB with nVidia 9500GT graphics card. Hardware's been working great with 10.10 for about 2 years.
    Background (optional - feel free to skip to the question): I was trying to get my component video output to generate 720p, messing around with the nvidia drivers, and now the entire display system is hosed. I can SSH in and get a terminal. Depending on which nvidia package I install/remove, I get:
        - Garbage on screen (after I "apt-get remove nvidia*").
        - A low-res graphical log-in screen where I can log in as fred or guest. If I log in as fred, it displays some text-mode status line then goes right back to the log-in screen. If I log in as guest, I actually get the full Ubuntu desktop, but I need to be able to log in as fred.
        - Other times I get an error: "API mismatch: the NVIDIA kernel module has version 304.43, but this NVIDIA driver component has version 295.49." I've googled around, including trying this thread with the same error message, but to no effect.
    Question: How can I just reset X settings, drivers, everything display-related to the exact same way it was after a fresh install?

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest Emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of Emacs if X is running? For example, could I set the GUI Emacs version as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal?
    EDIT: I briefly tested these two versions:
        GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20)
        GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), installed by default in Kubuntu
    This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version:
        Package: emacs23-nox
        Status: install ok installed
        Version: 23.3+1-1ubuntu9
        Replaces: emacs23, emacs23-gtk, emacs23-lucid
    The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install.
    References:
        What's New In Emacs 24 (part 1) | Mastering Emacs
        http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/
        "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change."
        EmacsWiki: Recent Changes
        http://www.emacswiki.org/emacs/?action=rc;showedit=0
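    One common way to get a newer Emacs than the repositories carry is a third-party PPA build. A hedged sketch only - the PPA and package names below are assumptions from memory, so check Launchpad for a currently maintained Emacs 24 PPA before using them:
        # add a PPA that packages emacs24 builds, then install it
        sudo add-apt-repository ppa:cassou/emacs   # assumed PPA name; verify on Launchpad first
        sudo apt-get update
        sudo apt-get install emacs24               # assumed package name within that PPA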

    Read the article

  • Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011

    - by Troy Kitch
    Oracle will have a very strong presence at this year’s Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. If you plan on being there, please be sure to stop by Oracle booth D and say “hi” to the Security Solution Experts. Please join us for the: Oracle Solution Provider Session Oracle Solution Showcase Receptions Oracle Face to Face Meetings We have some powerful database security demonstrations that we’re showing off. If you haven’t had an opportunity to check out the new Oracle Database Firewall, now’s your chance to learn why it’s the first line of defense in a database security defense in depth strategy. Additionally, Mark Morrison, director of intelligence community information assurance, and Pat Sack, VP of the Oracle national security group, will discuss U.S. government cross-domain secure information sharing. This case study session will explain how Oracle helped the U.S. government consolidate its mission-critical intelligence database infrastructure securely, and the underlying Oracle Database security solutions that can benefit any organization looking to increase business agility and drive down IT costs through database consolidation. Potomac Ballroom B Find out more about the event here. Twitter #GartnerSecurity to join the conversation.

    Read the article

  • Financial Statement Presentation Changes

    - by Theresa Hickman
    On March 10, 2010, FASB and IASB came to an agreement on financial statement presentation. They have been discussing changes for some time, such as displaying more line items and moving certain line items from equity to the P&L, and now it seems they have finally come to a joint decision and put it in writing. I recently learned that there will be a trend to book nothing to equity and to convey everything in the P&L, to take away any facility for companies to hide losses from their shareholders, investors, etc. (I'm exaggerating when I say book nothing to equity. Obviously, those items that already live there, such as stocks and dividends, would stay there.) But accounts like your CTA (Cumulative Translation Adjustment account), used to plug the gain or loss from equity translation, would move from the equity section to your expense section. The rationale is that when you run translation, you're doing so for a subsidiary that you own, or simply put, it's a foreign investment. Thus, the gains/losses of that foreign investment should be itemized on your P&L and not buried in equity. The FASB will include changes in financial statement presentation in its Exposure Draft that is planned for issuance at the end of April 2010. Companies will be required to adopt the financial statement presentation provisions retroactively. Yes, that means companies will need to apply these changes to previously issued financial statements. The FASB Summary of Board Decisions can be found at "fasb.org".

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So time is changing towards cloud development/computing. I'm trying to get the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts like a central (main repo) just like yesterday's local filesystem. Every service will base it service upon this main repo. Workflow: Github: I create a new Github repo served as main repo for the project. Cloud9ide. I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready I save (commit) it to main repo on Github. X: A running instance of Jenkins detects someone has committed and fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports if the tests were passed or not. If not, I make another commit til all tests are passing. X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's server and it will deploy my app automatically. What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9ide (when java is supported) or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any other predecessors I really wonder how the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a restful API/command line tool). But how Cloud9Ide will operate is still somewhat unclear to me. Someone on Cloud9ide mentioned that there will be buttons like deploy so I can deploy with one click. But that I guess will depend on what services that deploy process will hook up into etc. Could someone enlighten this cloud workflow topic and fill in the gaps. Thanks.

    Read the article

  • Ubuntu 12.04 boot hangs with a black screen before grub menu after upgrade (gma500_gfx driver)

    - by Eric van der Vlist
    I am using Ubuntu on a fit-pc2 (specifications) and after upgrading from 10.04 to 12.04 I get a black screen at boot time (before the grub menu is displayed) and the computer hangs with no disk activity. I have managed to boot Ubuntu 12.04 on a live USB key, but had to add the following boot options to do so: console=tty1 (or console=text), acpi=off, noapic, nomodeset. Using boot-repair, I have tried to add these options to /etc/default/grub (see this pastie log for instance) but I haven't been able to fix the black screen issue. I have tried many other things, such as the workarounds mentioned on the web for PSB-GFX drivers, without any success, and also uncommenting GRUB_TERMINAL=console, with the result of getting a "No video mode activated" error. During these tests I managed to break /boot/grub/grub.cfg and could then get to the grub command line. This gave me the chance to check that I can boot without problem if I type:
        grub> set root=(hd0,1)
        grub> linux /vmlinuz root=/dev/sda1 ro acpi=off noapic nomodeset console=tty1
        grub> initrd /initrd.img
        grub> boot
    How can I tell grub to use these options?
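    The usual way to make kernel parameters like these permanent is via /etc/default/grub; a minimal sketch, assuming the working options from the manual boot above:
        # in /etc/default/grub (edit as root, e.g. sudoedit /etc/default/grub)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off noapic nomodeset console=tty1"

        # then regenerate grub.cfg so the change takes effect on the next boot
        sudo update-grub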

    Read the article

  • Program error trying to generate Outlook 2013 email from Visual Basic 2010 [on hold]

    - by Dewayne Pinion
    I am using VB to send emails through Outlook. Currently we have a mix of Outlook versions at our office: 2010 and 2013, with a mix of 32-bit and 64-bit (a mess, I know). The code I have works well for Outlook 2010:
        Private Sub btnEmail_Click(sender As System.Object, e As System.EventArgs) Handles btnEmail.Click
            CreateMailItem()
        End Sub

        Private Sub CreateMailItem()
            Dim application As New Application
            Dim mailItem As Microsoft.Office.Interop.Outlook.MailItem = CType(application.CreateItem( _
                Microsoft.Office.Interop.Outlook.OlItemType.olMailItem), Microsoft.Office.Interop.Outlook.MailItem)
            'Me.a(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem)
            mailItem.Subject = "This is the subject"
            mailItem.To = "[email protected]"
            mailItem.Body = "This is the message."
            mailItem.Importance = Microsoft.Office.Interop.Outlook.OlImportance.olImportanceLow
            mailItem.Display(True)
        End Sub
    However, I cannot get this to work for 2013. I have referenced the version 15 DLL for 2013 and it seems to be backward compatible, but when I try to use the above code against 2013 (it is 64-bit) it says it cannot start Microsoft Outlook: "A program error has occurred." This is happening on the "Dim application As New Application" line. I have tried googling around, but there doesn't seem to be much out there referencing 2013. I feel that the problem here probably has more to do with 64-bit than with the software version. Thank you for any suggestions!
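    Not a confirmed fix, but one commonly suggested pattern for "cannot start Microsoft Outlook" interop errors is to attach to the Outlook instance that is already running and only create a new one if there isn't, and to double-check the project's platform target (AnyCPU vs x86) against the installed Office bitness. A hedged VB sketch of a helper that could replace the "New Application" line:
        Imports System.Runtime.InteropServices
        Imports Outlook = Microsoft.Office.Interop.Outlook

        ' Drop this helper into the form class and call it instead of "New Application".
        Private Function GetOutlookApplication() As Outlook.Application
            Try
                ' attach to the Outlook instance the user already has open
                Return CType(Marshal.GetActiveObject("Outlook.Application"), Outlook.Application)
            Catch ex As COMException
                ' otherwise start a fresh instance
                Return New Outlook.Application()
            End Try
        End Function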

    Read the article

  • VirtualBox : increase hard disk size of the virtual machine

    - by wim
    I have run out of space on my WinXP virtual machine, which I only gave 10 GB when I created it. Is there an easy way to increase it to, say, 20 GB? I can't see any obvious option in the VirtualBox settings.
    Edit: the suggestion below gives this error:
        wim@wim-ubuntu:/media/data/winxp_vm$ VBoxManage modifyhd wim.vdi --resize 20000
        VBoxManage: error: Cannot register the hard disk '/media/data/winxp_vm/wim.vdi' {46284957-2c09-4e70-8a49-bfbe0f7f681d} because a hard disk '/home/wim/VirtualBox VMs/winxp_vm/wim.vdi' with UUID {46284957-2c09-4e70-8a49-bfbe0f7f681d} already exists
        VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee nsISupports
        Context: "OpenMedium(Bstr(pszFilenameOrUuid).raw(), enmDevType, AccessMode_ReadWrite, fForceNewUuidOnOpen, pMedium.asOutParam())" at line 210 of file VBoxManageDisk.cpp
    Edit 2: removing the .vdi from VirtualBox before calling the VBoxManage command, then adding it back in, was successful. But now I can't boot the virtual machine; I get this worrying screen, which says "FATAL: Could not read from the boot medium! System halted."
    Edit 3: the .vdi must be reattached to the VM after the VBoxManage command. Further, the partition will need to be resized from WITHIN Windows, because otherwise you are left with unallocated empty space. I was able to resize the partition easily using a bit of freeware called EASEUS Partition Master 9.1.0 Home Edition.

    Read the article

  • Ogre3d particle effect causing error in iPhone

    - by anu
    1) First I have added the Particle Folder from the OgreSDK( Contains Smoke.particle) 2) Added the Smoke.material And smoke.png and smokecolors.ong 3) After this I added the Plugin = Plugin_ParticleFX in the plugins.cfg Here is my code: #Defines plugins to load # Define plugin folder PluginFolder=./ # Define plugins Plugin=RenderSystem_GL Plugin=Plugin_ParticleFX 4) I have added the particle path in the resources.cfg( adding the particle file in this get crash ) #Resource locations to be added to the 'bootstrap' path # This also contains the minimum you need to use the Ogre example framework [Bootstrap] Zip=media/packs/SdkTrays.zip # Resource locations to be added to the default path [General] FileSystem=media/models FileSystem=media/particle FileSystem=media/materials/scripts FileSystem=media/materials/textures FileSystem=media/RTShaderLib FileSystem=media/RTShaderLib/materials Zip=media/packs/cubemap.zip Zip=media/packs/cubemapsJS.zip Zip=media/packs/skybox.zip 6) Finally I did all the settings, my code is here: mPivotNode = OgreFramework::getSingletonPtr()->m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); // create a pivot node // create a child node and attach an ogre head and some smoke to it Ogre::SceneNode* headNode = mPivotNode->createChildSceneNode(Ogre::Vector3(100, 0, 0)); headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createEntity("Head", "ogrehead.mesh")); headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createParticleSystem("Smoke", "Examples/Smoke")); 7) I run this, I got the below error: An exception has occurred: OGRE EXCEPTION(2:InvalidParametersException): Cannot find requested emitter type. in ParticleSystemManager::_createEmitter at /Users/davidrogers/Documents/Ogre/ogre-v1-7/OgreMain/src/OgreParticleSystemManager.cpp (line 353) 8) Getting crash at: (void)renderOneFrame:(id)sender { if(!OgreFramework::getSingletonPtr()->isOgreToBeShutDown() && Ogre::Root::getSingletonPtr() && Ogre::Root::getSingleton().isInitialised()) { if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive()) { mStartTime = OgreFramework::getSingletonPtr()->m_pTimer->getMillisecondsCPU(); //( getting crash here) Does anyone know what could be causing this?
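    One thing worth checking (an educated guess, not a confirmed diagnosis): on iOS, Ogre is usually built with OGRE_STATIC_LIB, in which case plugins.cfg is not consulted and Plugin_ParticleFX has to be linked and registered statically; if the plugin never loads, ParticleSystemManager has no emitter factories and throws exactly this "Cannot find requested emitter type" error. A hedged C++ sketch - header and class names are taken from the desktop SDK and are assumptions for an iOS build:
        // link the ParticleFX static library into the app, then register the plugin by hand
        #include "OgreRoot.h"
        #include "OgreParticleFXPlugin.h"

        void installParticleFX(Ogre::Root* root)
        {
            // makes the Point/Box/etc. emitter factories available to createParticleSystem()
            root->installPlugin(new Ogre::ParticleFXPlugin());
        }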

    Read the article

  • Getting my Brother MFC-J825DW working as a network scanner

    - by AntonChanning
    I've been attempting to set up my new brother multi-function device to work as a printer and scanner using the following steps. It is connected to the network as a LAN device, not directly connected to my ubuntu machine. Downloaded the lpr driver and cupswrapper driver from Brother support. (Select the deb packages, not the rpms). Followed the instructions to install the lpr driver. Followed the instructions to install the cupswrapper driver. After this point I was able to successfully perform a test print, so the printer part is working. So far I haven't had much luck getting the scanner working. This is what I've tried: Downloaded the brscan4 and scan-key-tool deb packages from brother support. Followed the instructions for installing the scanner driver for network. Followed the instructions for installing scan-key-tool. However when I tried to scan it detects no scanner. I then tried the solution offered in this answer to a question based on a similar brother printer, but no luck. I must have made a mistake somewhere along the line. Does anyone have any ideas what I can try to find out what? Or should I uninstall everything and start again from the beginning?

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a "select into" query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) failed with:
        Msg 2714, Level 16, State 6, Line 1
        There is already an object named '#test' in the database.
    The only hypothesis we could come up with was that the "create table" part of the statement was committed independently from the actual "insert". We can confirm that by capturing the "Transaction Log" event on Profiler (filtering by SPID0). The result is that when we run:
        SELECT * INTO #results FROM master.sys.objects
    we get Profiler output in which it is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into independent "CREATE TABLE / INSERT SELECT" statements, or you can enclose the SELECT INTO in an explicit transaction:
        SET XACT_ABORT ON
        BEGIN TRANSACTION
        SELECT * INTO #results FROM master.sys.objects
        COMMIT

    Read the article

  • C# Adds Optional and Named Arguments

    Earlier this month Microsoft released Visual Studio 2010, the .NET Framework 4.0 (which includes ASP.NET 4.0), and new versions of their core programming languages: C# 4.0 and Visual Basic 10. In designing the latest versions of C# and VB, Microsoft has worked to bring the two languages into closer parity. Certain features available in C# were missing in VB, and vice-a-versa. Last week I wrote about Visual Basic 2010's language enhancements, which include implicit line continuation, auto-implemented properties, and collection initializers - three useful features that were available in previous versions of C#. Similarly, C# 4.0 introduces new features to the C# programming language that were available in earlier versions of Visual Basic, namely optional arguments and named arguments. Optional arguments allow developers to specify default values for one or more arguments to a method. When calling such a method, these optional arguments may be omitted, in which case their default value is used. In a nutshell, optional arguments allow for a more terse syntax for method overloading. Named arguments, on the other hand, improve readability by allowing developers to indicate the name of an argument (along with its value) when calling a method. This article examines how to use optional arguments and named arguments in C# 4.0. Read on to learn more! Read More >
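    To make the two features concrete, here is a brief hedged sketch (a made-up Send method, not taken from the article) of optional and named arguments in C# 4.0:
        using System;

        class Mailer
        {
            // "cc" and "isHtml" are optional parameters: callers may omit them and the defaults apply.
            static void Send(string to, string subject, string cc = null, bool isHtml = false)
            {
                Console.WriteLine("to={0}, subject={1}, cc={2}, isHtml={3}",
                                  to, subject, cc ?? "(none)", isHtml);
            }

            static void Main()
            {
                Send("user@example.com", "Hello");                // both optional arguments omitted
                Send("user@example.com", "Hello", isHtml: true);  // named argument lets us skip "cc"
            }
        }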

    Read the article

  • Hybrid USB Install Method - netboot and iso

    - by Samus Arin
    I was following the steps here ("Preparing Files for USB Memory Stick Booting") https://help.ubuntu.com/10.04/installation-guide/i386/boot-usb-files.html to create a installation usb drive for 12.1. The very first paragraph of the article states "The second is to also copy a CD image onto the USB stick and use that as a source for packages, possibly in combination with a mirror." However, the only instructions mentioned regarding an iso image is to simply copy one somewhere on the drive (after its been made bootable and syslinux, vmlinuz and initrd.gz installed/copied): "you should now copy an Ubuntu ISO image onto the stick." I thought it strange there where no configuration steps for "pointing" the kernel to the iso (like a line in syslinux.cfg or a boot: option or something), but went ahead with the install anyway. I don't think the iso was used at all, it appeared that all the OS files where downloaded during the install process. Therefore, I was wondering if anyone knew how to use this local iso image in this particular installation technique (I know the image can be installed with dd, but thats a different technique), b/c I need to reinstall (I installed unity, but it's wayy to much for my little Atom based netbook) ? Thank you.

    Read the article

  • .NET Demon 1.0 Released

    - by theo.spears
    Today we're officially releasing version 1.0 of .NET Demon, the Visual Studio Extension Alex Davies and I have been working on for the last 6 months. There have been beta versions available for a while, but we have now released the first "official" version and made it available to purchase. If you haven't yet tried the tool, it's all about reducing the time between when you write a line of code and when you are able to try it out so you don't have to wait: Continuous compilation We use spare CPU cycles on your machine to compile your code in the background when you make changes, so assemblies are up to date whenever you want to run them. Some clever logic means we only recompile code which may have been affected by your changes. Continuous save .NET Demon can perform background saving, so you don't lose any work in case of crashes or power failures, and are less likely to forget to commit changed files. Continuous testing (Experimental) The testing tool in .NET Demon watches which code you change in your solution, and automatically reruns tests which are impacted, so you learn about any breaking changes as quickly as possible. It also gives you inline test coverage information inside Visual Studio. Continuous testing is still experimental - it will work fine in many cases, but we know it's not yet perfect. Releasing version 1.0 doesn't mean we're pausing development or pushing out improvements. We will still be regularly providing new versions with improved functionality and fixes for any bugs people come across. Visit the .NET Demon product page to download

    Read the article

  • Configuring LiveID authentication with SharePoint2010

    - by ybbest
    With the addition of the new claims-based authentication framework in SharePoint 2010, SharePoint is now more loosely coupled to the authentication layer than ever. You've probably seen presentations or webinars where it was mentioned that you can use claims authentication against authentication providers such as Live ID and OpenID. In this blog I will show you the common problems you may hit while configuring LiveID integration with SharePoint 2010. The detailed configuration can be found in the following blogs:
        Part 1 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-1.aspx
        Part 2 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-2.aspx
        Part 3 – http://www.wictorwilen.se/Post/Visual-guide-to-Windows-Live-ID-authentication-with-SharePoint-2010-part-3.aspx
    Here are some problems I had following the instructions.
    Problem 1: You get the following exception when you run the PowerShell script to create the new LiveID authentication provider:
        New-SPTrustedIdentityTokenIssuer : Exception of type 'System.ArgumentException' was thrown.
        Parameter name: claimType
        At line:1 char:42
        + $authp = New-SPTrustedIdentityTokenIssuer <<<< -Name "LiveID INT" -Description "LiveID INT" -Realm $realm -ImportTrustCertificate $certfile -ClaimsMappings $emailclaim,$upnclaim -SignInUrl "https://login.live-int.com/login.srf" -IdentifierClaim $emailclaim.InputClaimType
        + CategoryInfo : InvalidData: (Microsoft.Share…dentityProvider:SPCmdletNewSPIdentityProvider) [New-SPTrustedIdentityTokenIssuer], ArgumentException
        + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPIdentityProvider
    Solution: You need to remove the existing SPTrustedIdentityTokenIssuer.
        1. First get the name of the existing token issuer with Get-SPTrustedIdentityTokenIssuer, then run Remove-SPTrustedIdentityTokenIssuer to remove it.
        2. After that, re-run the script; everything should work fine now.
    Problem 2: Live INT automatically logs out. Whenever I try to log in (https://login.live-int.com/login.srf), after entering a valid email/password I get redirected to the logout page.
    Solution: You can find the solution in my previous blog.
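    A minimal sketch of the removal step from Problem 1 above, run from the SharePoint 2010 Management Shell (the issuer name "LiveID INT" is taken from the error output and may differ in your farm):
        # list the trusted identity token issuers so you can see the exact name
        Get-SPTrustedIdentityTokenIssuer | Select-Object Name

        # remove the stale issuer, then re-run the creation script
        Remove-SPTrustedIdentityTokenIssuer -Identity "LiveID INT"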

    Read the article

  • Oracle HCM User Group (OHUG) 2012 Conference

    - by Maria Ana Santiago
    The PeopleSoft HCM team is looking forward to a great OHUG conference and to meeting with our PeopleSoft HCM Customers there! The OHUG Global Conference 2012 will be held at the Mirage in Las Vegas, Nevada, June 18-22, 2012. With Oracle Corporation's continued support of the Global OHUG Conference, this event is one of the best opportunities PeopleSoft HCM Customers have to interact and communicate directly with PeopleSoft Strategy, Development and Support and understand the entire Oracle HCM opportunities that await. PeopleSoft HCM has 10 exciting sessions and several Meet the Experts sessions planned to highlight the value and opportunities with PeopleSoft applications. For details on the PeopleSoft HCM tracks and sessions please visit the OHUG Session Line Up page. PeopleSoft HCM will be offering an annual General Roadmap session by Tracy Martin and multiple Product specific sessions. Our PeopleSoft HCM General session will provide very valuable information on our continuous delivery strategy and upcoming HCM 9.2 release and beyond. Tracy will also address opportunities that await PeopleSoft customers with co-exist opportunities with Fusion, Taleo, Oracle BI and more. Our Product Roadmap sessions will go into product specific areas providing roadmap information for the corresponding product domains. There will also be a PeopleTools Roadmap and Vision session that will let Customers see what is new in PeopleTools and what is planned for the future. And last, but not least, PeopleSoft will be holding the annual Meet the Experts sessions. Customers who want to have focused discussions on specific areas or products can meet with PeopleSoft Strategy, Development and Support teams who will be available to discuss product features and answer Customers' questions. Don’t miss this opportunity! If you are a PeopleSoft HCM Customer, join us at OHUG! Look forward to seeing you there.

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums I often find a wide-spread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists while programming is more about getting things done. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc, may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day if it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)
        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}
    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".
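    A hedged variation on the same one-liner (same paths as above) that skips client folders which already exist in the target library, so it can be re-run safely:
        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} |
            where {-not (Test-Path "\\sharepoint\admin documents\$($_.Name)")} |
            % {mkdir "\\sharepoint\admin documents\$($_.Name)"}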

    Read the article
