Search Results

Search found 48823 results on 1953 pages for 'run loop'.


  • KDE not loading without nomodeset in grub and bad resolution

    - by fcole90
    I recently installed Linux Mint 13 KDE but it isn't working properly. At first I had to use failsafe mode to boot, because a normal boot takes me to a text login, and from there I'm not able to start KDE with either kdm start or startx: kdm says it's already running, and startx fails because it can't connect the X server to the display. Stopping kdm and running startx again doesn't change anything.

    I have now edited grub to boot with nomodeset. That way KDE loads, but the resolution is wrong and xrandr doesn't help, because when I run cvt 1366 768 it rounds the width up to 1368:

        # 1368x768 59.88 Hz (CVT) hsync: 47.79 kHz; pclk: 85.25 MHz
        Modeline "1368x768_60.00"  85.25  1368 1440 1576 1784  768 771 781 798 -hsync +vsync

    I also installed Bumblebee and the NVIDIA drivers because of the Optimus technology. That worked just enough to have fun with glxspheres, but there is no gain in KDE. This is the lspci output:

        fabio@fabio-EasyNote-TS11HR ~ $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)

    My notebook is an EasyNote TS with an NVIDIA GeForce GT 540M. Thanks in advance to anyone who may help!
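    For reference, the usual way to register the Modeline that cvt prints is with xrandr; a minimal sketch, assuming the display shows up as LVDS1 in xrandr -q (the output name is a guess - check your own):

        # create the mode from the cvt Modeline, attach it to the output, switch to it
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode LVDS1 "1368x768_60.00"
        xrandr --output LVDS1 --mode "1368x768_60.00"

    Note that booting with nomodeset disables the kernel's mode-setting driver, so xrandr may refuse to add modes at all until the proper driver is working.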

    Read the article

  • Dynamically changing DVT Graph at Runtime

    - by mona.rakibe(at)oracle.com
    I recently came across a requirement where the customer wanted to change the graph type at run time based on a selection. After some internal discussion we realized this is best achieved by using af:switcher to toggle between multiple graphs. In this blog I share the sample I built to demonstrate this.

    Note: in the sample below, every time the user changes the graph type there is a server trip, so please use this approach with the performance implications in mind. The sample can be downloaded from DynamicGraph.zip.

    Set-up: Create a BAM data control using the Employees DO (sample) (refer to this entry).

    Steps - create the view:
    1. Create a new JSF page.
    2. From the component palette, drag and drop "Select One Radio" onto the page, enter a label, and click "Finish".
    3. In the Property Editor, set the "AutoSubmit" property to true.
    4. Drag and drop "Switcher" from the components onto the page.
    5. In the Structure pane, select the af:switcher, right-click, and surround it with "PanelGroupLayout".
    6. In the Property Editor, set the "PartialTriggers" property of the PanelGroupLayout to the id of the af:selectOneRadio.
    7. Again in the Structure pane, select the af:switcher, right-click, and choose "Insert inside af:switcher > facet"; enter a facet name (e.g. pie).
    8. Repeat to add another facet (e.g. bar).
    9. From "Data Controls", drag and drop "Employees > Query" into the pie facet as "Graph > Pie" (Pie: Sales_Number, Slices: Salesperson).
    10. From "Data Controls", drag and drop "Employees > Query" into the bar facet as "Graph > Bar" (Bars: Sales_Number, X-axis: Salesperson).
    11. Wire the switcher to the af:selectOneRadio using their "facetName" and "value" properties respectively.

    Now run the page and notice that the graph renders according to the user's selection.

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text of all open SQL windows every N minutes, with the default being 30 minutes. This fixes a shortcoming of the Query Execution History, which is saved only when a query is run: if you're working on a large script and never execute it, the existing Query Execution History won't save it. By contrast, the Window Content History saves everything to a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you also get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple, searchable .sql files, there is no special search editor built in. It is turned on by default, but despite the built-in optimizations for minimizing space, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list.

    The fixed bugs are:
    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context-menu bug where multiple SSMS Tools entries were shown, and wrong entries were shown for a node.
    - A datagrid bug in SQL snippets.
    - The ability to read illegal XML characters from log files.
    - The upper limit of saved history text, now fixed at 5 MB.
    - A bug where searching through result sets prevented the search.
    - A bug where text formatting errored out for certain scripts.
    - A bug in finding servers, where null was returned even though servers existed.
    - A Run Custom Scripts objects bug where |SchemaName| didn't display the correct table schema for columns. This is fixed, and the |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • How do I enable sound with the "linux-virtual" kernel?

    - by Ola Tuvesson
    I've been trying to enable sound for the linux-virtual kernel, as I want to run an ultra-slim Ubuntu server under VirtualBox but need audio. The resource-usage difference between virtual and generic/server is surprisingly large, with the virtual-kernel system using 80 MB less RAM after a clean boot (130 MB vs 210 MB), and I really want to squeeze every clock cycle and available byte I can out of the system. Besides, the virtual kernel has some additional optimisations enabled specifically for virtual machines (or so I am told).

    Now, I have compiled my own kernel a few times in the past, for example to include the Intel-PHC module (for improved power management on Thinkpads), so the concept is not entirely alien to me, but I've run into a strange problem which I'm hoping someone can explain: when I diff the config files for linux-generic and linux-virtual there are precious few differences, and certainly none which pertain to sound support; there are really only five or six lines which differ, and they're mainly to do with I/O timing, sleep states and priorities. What gives? I expected the differences to be extensive, and that I would be able to identify the options that enable audio by looking at them, but my problem doesn't seem to be related to the config file at all (yes, I know about the sound drivers section - it is identical between the two kernel configs). Am I looking in the wrong place? Many thanks!
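    For reference, here is roughly how such a comparison can be done on a stock install; a sketch, with the kernel version strings as placeholders for whatever is actually in your /boot:

        # compare the two shipped configs, then look at the sound options specifically
        diff /boot/config-3.2.0-24-generic /boot/config-3.2.0-24-virtual
        grep -E 'CONFIG_SOUND|CONFIG_SND' /boot/config-$(uname -r)
        # if the options match, check whether the sound modules were shipped at all
        ls /lib/modules/$(uname -r)/kernel/sound

    If the configs really are identical in the sound section, it is worth checking whether the virtual flavour simply ships a reduced module set, which would explain missing audio despite matching configs.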

    Read the article

  • .wine-pipelight folder not present

    - by DaimyoKirby
    Following the instructions on the pipelight installation page, I installed pipelight on Ubuntu 14.04. However, upon opening Firefox the .wine-pipelight folder isn't present in my home folder, and I get the following errors:

        [PIPELIGHT:LIN:unknown] attached to process.
        [PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG.
        [PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1.
        [PIPELIGHT:LIN:unknown] trying to load config file from '/home/alden/.config/pipelight-silverlight5.1'.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:427:checkSilverlightGraphicDriver(): error in execlp command - probably silverlightGraphicDriverCheck not found or missing execute permission.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:441:checkSilverlightGraphicDriver(): GPU driver check - Your driver is not in the whitelist, hardware acceleration disabled.
        [PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/alden/.wine-pipelight.
        [PIPELIGHT:LIN:silverlight5.1] checking plugin installation - this might take some time.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:374:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:384:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1).
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:108:attach(): plugin not correctly installed - aborting.

    I've reinstalled quite a few times and have run through many of the common fixes offered on the pipelight Launchpad pages and here on Ask Ubuntu, and still it fails to run. Is there a reason why this folder isn't present, or why I'm getting these errors?

    Edit: Oddly enough, the .wine-pipelight folder is created when I open Nitro, although this still doesn't fix the issue.
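    The "missing execute permission" lines suggest the helper scripts never got installed correctly, so one sanity check is a clean reinstall and plugin refresh; a hedged sketch of the commonly suggested commands (package and plugin names assume the standard pipelight PPA setup):

        sudo apt-get install --reinstall pipelight-multi
        sudo pipelight-plugin --update          # refresh the installer scripts as root
        pipelight-plugin --enable silverlight   # then re-enable the plugin as your user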

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering what generic build tools out there ship their own binary runtimes and do not depend on an environment that isn't bundled with them. For example, Ant requires Java and Rake requires Ruby. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so.

    I'd see something like that as useful for building .NET projects (say, on both Windows .NET and Mono) and especially Node.js projects. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node.js build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on Stack Overflow.

    Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in .NET; maybe several Node/NPM commands in Node, etc.) and then run the remaining build/test steps, instead of setting all of these up in MSBuild (again, in the .NET case; I'm also wondering whether there is an equivalent story in Node).
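    As a rough illustration of the kind of wrapper described above, here is a minimal shell sketch that drives MSBuild and npm directly; the script name, solution file and targets are all hypothetical:

        #!/bin/sh
        # build.sh - compile the .NET solution, then run the Node test suite
        set -e                                           # stop at the first failing step
        xbuild MySolution.sln /p:Configuration=Release   # msbuild.exe on Windows, xbuild on Mono
        npm install                                      # fetch Node dependencies
        npm test                                         # run whatever package.json defines as tests

    A CI tool can then call this one script, which is the platform-agnostic part the question is after.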

    Read the article

  • How do I add a CPU frequency that should be available?

    - by Andrew Redd
    I have a system with an Intel Core i7 970 that should be able to run at 3.2 GHz. I'm running Ubuntu 12.04 and installed the cpufreq indicator to be able to change the governor, and noticed that only frequencies up to 2.0 GHz are available to me. I set the governor to performance and checked with cpufreq-info:

        $ cpufreq-info -c 0
        cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
        Report errors and bugs to [email protected], please.
        analyzing CPU 0:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1 2 3 4 5 6 7 8 9 10 11
          CPUs which need to have their frequency coordinated by software: 0
          maximum transition latency: 10.0 us.
          hardware limits: 1.60 GHz - 2.00 GHz
          available frequency steps: 2.00 GHz, 1.86 GHz, 1.73 GHz, 1.60 GHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 1.60 GHz and 2.00 GHz.
                          The governor "performance" may decide which speed to use
                          within this range.
          current CPU frequency is 2.00 GHz (asserted by call to hardware).
          cpufreq stats: 2.00 GHz:4.93%, 1.86 GHz:0.03%, 1.73 GHz:0.02%, 1.60 GHz:95.02%  (718654)

    And to double-check:

        $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
        1995000 1862000 1729000 1596000

    How do I get all the frequencies that should be available to me, up to 3.2 GHz?
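    When acpi-cpufreq reports hardware limits this far below the CPU's rated speed, a common culprit is a BIOS-imposed cap; two hedged checks (the bios_limit file only exists when such a cap is active, and processor.ignore_ppc=1 is a workaround to try at your own risk):

        # see whether the BIOS is capping the frequency
        cat /sys/devices/system/cpu/cpu0/cpufreq/bios_limit
        # if it prints roughly 2 GHz, tell the processor driver to ignore the cap,
        # then reboot and re-run cpufreq-info
        echo 'options processor ignore_ppc=1' | sudo tee /etc/modprobe.d/ignore_ppc.conf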

    Read the article

  • What is up with the Joy of Clojure 2nd edition?

    - by kurofune
    Manning just released the second edition of the beloved Joy of Clojure book, and while I share that love, I get the feeling that many of the examples are already outdated. In particular, in the chapter on optimization the recommended type hinting seems not to be allowed by the compiler (I don't know if this was allowed in older versions of Clojure). For example:

        (defn factorial-f [^long original-x]
          (loop [x original-x, acc 1]
            (if (>= 1 x)
              acc
              (recur (dec x) (*' x acc)))))

    returns:

        clojure.lang.Compiler$CompilerException: java.lang.UnsupportedOperationException:
        Can't type hint a primitive local, compiling:(null:3:1)

    Likewise, the chapter on core.logic seems to be using an old API, and I have to find workarounds for each example to accommodate the recent changes. For example, I had to turn this:

        (logic/defrel orbits orbital body)
        (logic/fact orbits :mercury :sun)
        (logic/fact orbits :venus :sun)
        (logic/fact orbits :earth :sun)
        (logic/fact orbits :mars :sun)
        (logic/fact orbits :jupiter :sun)
        (logic/fact orbits :saturn :sun)
        (logic/fact orbits :uranus :sun)
        (logic/fact orbits :neptune :sun)

        (logic/run* [q]
          (logic/fresh [orbital body]
            (orbits orbital body)
            (logic/== q orbital)))

    into this, leveraging the pldb lib:

        (pldb/db-rel orbits orbital body)

        (def facts
          (pldb/db
            [orbits :mercury :sun]
            [orbits :venus :sun]
            [orbits :earth :sun]
            [orbits :mars :sun]
            [orbits :jupiter :sun]
            [orbits :saturn :sun]
            [orbits :uranus :sun]
            [orbits :neptune :sun]))

        (pldb/with-db facts
          (logic/run* [q]
            (logic/fresh [orbital body]
              (orbits orbital body)
              (logic/== q orbital))))

    I am still pulling teeth to get the later examples to work. I am relatively new to programming myself, so I wonder: am I naively overlooking something here, or are these legitimate concerns? I really want to get good at things like type hinting and core.logic, but I want to make sure I am studying up-to-date materials. Any illuminating facts to help clear up my confusion would be most welcome.

    Read the article

  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question: based on the mix of answers there, I believe Windows could be a suitable development environment.

    I have been developing in Ruby (mostly Rails, but not entirely) for about a year now for personal projects, on a MacBook Pro; however, that machine has met an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment there included the standard (Linux-like) tools built into OS X, TextWrangler, Git, RVM, et al. - not too much of a deviation from what the 'devotees' tend to assume.

    Now I am setting up my new Windows box to continue that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively on Windows. I am aware of the Windows Ruby installer. It seems decent, but it's not exactly as nice as RVM in the OS X/Linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a web server in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby development environment look like, and what is the workflow? What would a typical walkthrough be for adding a new feature to an ongoing project, and what would the technology stack look like?

    Read the article

  • Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud

    - by Gene Eun
    Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software. Here are the key elements of the partnership:

    - Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure.
    - Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure.
    - Microsoft will add Infrastructure Services instances with popular configurations of Oracle software, including Java, Oracle Database and Oracle WebLogic Server, to the Windows Azure image gallery.
    - Microsoft will offer fully licensed and supported Java in Windows Azure.
    - Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure.

    Oracle's strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on both. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments. For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment. Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes:

    - MOS Note 1563794.1 - Certified Software on Microsoft Windows Server 2012 Hyper-V (NEW)
    - MOS Note 417770.1 - Oracle Linux Support Policies for Virtualization and Emulation (UPDATED)

    Read the article

  • How do I identify mouse clicks versus mouse down in games?

    - by Tristan
    What is the most common way of handling mouse clicks in games, given that all you have for detecting mouse input is whether a button is up or down? I currently just treat mouse-down as a click, but this simple approach limits me greatly in what I want to achieve. For example, I have some code that should run only once per click, but using mouse-down as the click can cause the code to run more than once, depending on how long the button is held. So I need a real click!

    But what is the best way to handle a click? Is a click when the mouse goes from up to down, or from down to up? Or is it a click if the button was down for less than x frames/milliseconds - and if so, is it considered a mouse-down and then a click if it's held for x frames/milliseconds, or a click and then a mouse-down? I can see that each of these approaches has its uses, but which is the most common in games? And, to ask more specifically, which is the most common in RTS games?

    Read the article

  • How to use Nintex Reusable Workflow Template

    - by ybbest
    If you'd like to reuse your workflow logic across more than one list or library, you can create a reusable workflow template. Here are the steps:

    1. Go to site settings and create a reusable workflow template.
    2. Select the content type you'd like the template to be bound to, and give the workflow a title.
    3. Create your workflow the same way as you would for a list workflow, and publish it.
    4. Finally, add your workflow to the list where you'd like it to run.
    5. Go to workflow settings and add a workflow.
    6. Select the content type and configure the workflow as below.
    7. After you've done this, your workflow will run as usual.

    Note:
    1. You cannot conditionally start your workflow.
    2. Your workflow is not automatically bound to the list when you add the content type to the list; you need to configure it manually as shown in steps 4-6.

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do that using JarSplice. So if I have a game which uses JInput, I pack my game jar and jinput.jar using JarSplice and add the natives in the process.

    The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I never have to copy code like the wrapper I wrote for JInput's Controller, and I always have a definitive version inside a library jar. Basically what I want to do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then, when I want to export a jar of my game, either export it automatically through Eclipse with the library jar or, if that doesn't work, use JarSplice.

    I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's a duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors - the jar just doesn't run.

    I'm not expecting anyone to solve this, but if anyone has an idea - something that will allow me to never look at the gamepad code ever again - that would be awesome. I don't care if I have to package my library jar using JarSplice five times and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the gamepad class into every project alongside the library jar. :(
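    One hedged workaround for the duplicate .project/.classpath error is to merge the jars by hand, deleting the Eclipse metadata before re-jarring; a sketch using the JDK's jar tool (file and class names are hypothetical):

        mkdir merged && cd merged
        jar xf ../mylibrary.jar      # unpack the library jar (with the JInput wrapper inside)
        jar xf ../mygame.jar         # unpack the game jar over it
        rm -f .project .classpath    # drop the duplicated Eclipse metadata
        jar cfe ../game-all.jar com.example.game.Main .   # repack with a Main-Class entry

    JarSplice should then only have to attach the natives to the single merged jar.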

    Read the article

  • Prevent Eclipse Java Builder from Compiling Java-Like Source

    - by redjamjar
    I'm in the process of writing an Eclipse plugin for my programming language Whiley (see http://whiley.org). The plugin is working reasonably well, although there's lots to do. Two pieces of the jigsaw are:

    1. I've created a "Whiley Builder" by subclassing the incremental project builder. This handles building and cleaning of "*.whiley" files.
    2. I've created a content type called "Whiley Source Files" for "*.whiley" files, which extends "org.eclipse.jdt.core.javaSource" (this follows Andrew Eisenberg's suggestion).

    The advantage of having the content type extend javaSource is that it immediately fits into the package explorer, etc. In principle, I could flesh out ICompilationUnit to provide more useful info, although I haven't done that yet. The disadvantage is that the Java builder is trying to compile my Whiley files... and it obviously can't.

    Originally, I had the Java builder run first, then the Whiley builder. Superficially, this actually worked out quite well, since all of the errors from the Java builder were discarded by the Whiley builder (for Whiley files). However, I actually want the Whiley builder to run first, as this is the best way for me to resolve dependencies between Java and Whiley files. Which leads me to my question: can I stop the Java builder from trying to compile certain Java-like resources - specifically, in my case, those with the "*.whiley" extension? As an alternative, I was wondering whether my Whiley builder could somehow update the resource delta to remove those files which it has dealt with. Thoughts?

    Read the article

  • Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory - webcast follow up

    - by Darin Pendergraft
    Thanks to all of the guest speakers on our Sun2Oracle webcast: Steve from Hub City Media, Albert from UCLA, and our own Scott Bonell. During the webcast we tried to answer as many questions as we could, but there were a few that needed a bit more time to answer. Albert from UCLA sent me the following information:

    Alternate directory evaluation: We were happy with Sun DSEE, and based on the research we had done, OUD was a logical continuation of DSEE. If we moved away, it was to go open source. UCLA evaluated OpenLDAP, OpenDS and Red Hat's 389 Directory, and briefly entertained Active Directory. Ultimately, we decided to stay with OUD for the enterprise directory and adopt OpenLDAP for the non-critical edge directories.

    Hardware: For the enterprise directory, UCLA runs three Dell PowerEdge R710 servers. Each server has 12 GB RAM and two 2.4 GHz Intel Xeon E5645 processors. We run two of those servers at UCLA's data center in a semi active-passive configuration; the third server is located at UC Berkeley. All three are multi-master replicated. At run time, the bulk of LDAP query requests go to one server - essentially, all of our authn/authz traffic is handled by one server, with the other two acting as redundant backups.

    Read the article

  • Cron not able to succesfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on the time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that I don't get the correct background image when I resume from suspend, either. I've found that cycling the wallpaper manually fixes this, and I've written a script to automate the process. If I execute the script manually, it works fine. However, when I schedule the script in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after changing the background, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab, so it shouldn't be running as root. Alternatively, is there any way to make Precise play nicely with XML wallpaper?
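    A plausible explanation is that gsettings needs the session's D-Bus address, which cron jobs don't inherit; a common hedged workaround is to pull the address from a running session process at the top of the script (gnome-session is an assumption here - substitute whatever session process your desktop runs):

        # add before the gsettings calls
        PID=$(pgrep -u "$USER" gnome-session | head -n1)
        export DBUS_SESSION_BUS_ADDRESS=$(grep -z DBUS_SESSION_BUS_ADDRESS \
            "/proc/$PID/environ" | tr -d '\0' | sed 's/^DBUS_SESSION_BUS_ADDRESS=//')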

    Read the article

  • Problems with mounting .ISO files

    - by user89599
    I'm using Precise, with GNOME. I've attempted to install some retro multi-CD games (KOTOR 1) via .ISO images and WINE, but I can't get the ISOs to mount correctly. First I tried GMountISO, which showed a read-only warning but worked - until I went to unmount. When the installation program asked for CD 2, I couldn't unmount from the /cdrom folder because neither GMountISO nor umount from the terminal could detect the mount. After a reboot, I changed to GISOMount (different somehow, I guess?), but when I attempt to mount the ISO I get an error window explaining the syntax of the mount command, which is also what I get when I attempt to use mount from the terminal. After checking /media from the terminal on a lark, I see the disc mounted there twice over, but umount won't recognize it, even when I specify the full path with sudo umount /media/KOTOR_1.iso. It cleared up upon reboot. Can someone please assist?

    UPDATE: Thanks for the quick response. What's weird is that the images are stuck in limbo, like so:

        sc@sc-HP-110-3700:/media$ ls
        cdrom  KOTOR_1(0)(vcd)  KOTOR_1(vcd)
        sc@sc-HP-110-3700:/media$ cd cdrom
        sc@sc-HP-110-3700:/media/cdrom$ ls
        sc@sc-HP-110-3700:/media/cdrom$ cd ..
        sc@sc-HP-110-3700:/media$ umount KOTOR_1(vcd)
        bash: syntax error near unexpected token `('
        sc@sc-HP-110-3700:/media$ umount KOTOR_1.ISO
        umount: KOTOR_1.ISO is not mounted (according to mtab)
        sc@sc-HP-110-3700:/media$ sudo umount -a
        umount: /run/shm: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))
        umount: /run: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))
        umount: /dev: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))
        umount: /: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))
        sc@sc-HP-110-3700:/media$
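    Two side notes that may help here: the syntax error above is just bash interpreting the parentheses, so such names need quoting (umount "KOTOR_1(vcd)"); and ISOs can be mounted directly with the kernel's loop device, sidestepping both GUI tools. A sketch with example paths:

        sudo mkdir -p /mnt/kotor1 /mnt/kotor2
        sudo mount -o loop,ro ~/isos/KOTOR_1.iso /mnt/kotor1   # mount CD 1
        sudo mount -o loop,ro ~/isos/KOTOR_2.iso /mnt/kotor2   # CD 2 can stay mounted alongside
        # run the installer, then clean up:
        sudo umount /mnt/kotor1 /mnt/kotor2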

    Read the article

  • How can I access one desktop session from another on the same machine?

    - by d3vid
    I want to run a desktop session as user A, and from that session access a different desktop session as user B. This way I can test, screencast or share my screen from session B, while having access to apps/resources in session A that I do not want running/visible in session B. What application can I do this with? I assume some kind of remote desktop client/server is what I'm looking for. So far I have tried:

    - VNC: Logged in as user A and user B. In session B, ran Desktop Sharing. Switched to session A and tried to access the share with Remmina. Failed. (I can get an image to appear, but it's frozen.)
    - x2go: Installed server and client from the stable PPA (a workaround was needed for the installation to succeed). Created a connection, which starts and then fails instantly. Discovered a mailing-list post suggesting that accessing localhost is not supported.

    On the non-remote front:

    - VirtualBox: Created a minimal virtual machine for session B. Too resource-heavy.

    Am I attempting the impossible? Should I be looking for something other than a remote desktop tool?
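    One avenue not on the list is a nested X server such as Xephyr, which runs a second session inside a window of the first; a rough sketch only, with the display number, geometry and session command all assumptions (and note that -ac turns off access control, a real security trade-off):

        sudo apt-get install xserver-xephyr
        Xephyr :2 -ac -screen 1280x800 &                        # nested X server in a window
        sudo -u userB env DISPLAY=:2 dbus-launch gnome-session  # start user B's session in it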

    Read the article

  • wynapse.com down last night, SC tonight

    - by Dave Campbell
    In an industry segment where nobody is ever 'asleep', I suppose no time is a good time to take SQL Server down for upgrades, and I had forgotten that my host was going to be doing that. Last night about 9pm (Arizona), in the middle of working on a blog post, things started going wonky, and I finally realized everything was fine except SQL Server. I turned in a ticket on it and was reminded about the maintenance schedule... I guess I file those away in memory and just assume they'll happen while I'm asleep :)

    So, looking at the schedule, it appears that SQL Server for SilverlightCream is going to be down tonight. The minimum window is 9-12pm Arizona time... mileage and time may vary. Since all the posts are run through SC to get the Skim count, having SQL down sucks, but I'd rather we got maintenance than have to react to a crash because of something that wasn't maintained. I'll try to get the next 'Cream post out early so that the bulk of folks can dig through it prior to the outage.

    Meanwhile, for those of you in Phoenix, tonight is our Phoenix Silverlight User Group April meeting, and Joel Neubeck is going to give us the run-through on Windows Phone 7! We're not as advanced as those MVP rock stars in California like Victor Gaudioso, who streams his user group meetings, so you'll just have to show up for the goodness! And for anyone that's interested, here's some WP7 bling for your desktop... I want some of this real bad for my laptop! Get the full image in the post by Ozymandias.

    Read the article

  • How to reset display settings in XFCE \ Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD 5770, I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release updates to the fglrx drivers produce an error on installation, so Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors: a 17" CRT on VGA and a 17" LCD on DVI) in amdcccle, and everything was perfect.

    Then, two days ago, I accidentally clicked on "Display" in the XFCE settings manager, and after that everything got screwed. I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is that now, if I set my CRT to 1152x864, it stays at 1280x1024 virtually, and half the stuff falls off the screen. It also puts the LCD at 1280x1024, but then overlays the CRT's display on top of it, with different wallpaper, in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR.

    I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that holds the monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I reinstalled them, I could no longer run amdcccle, as it says the driver isn't installed!!

    Can someone help me reset my XFCE settings so the monitors aren't stuck with an incorrect virtual desktop size, and also help me get the fglrx drivers back and working? I really don't want to have to format, reinstall and go through all that hassle, but it looks like I may have to :(
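    For the XFCE half of this, the display layout is stored in an xfconf XML file that can be cleared so it gets regenerated on the next login; a hedged sketch (the displays.xml path is from memory - verify it exists before touching it, and back it up rather than deleting outright):

        cd ~/.config/xfce4/xfconf/xfce-perchannel-xml
        cp displays.xml displays.xml.bak   # keep a copy, just in case
        rm displays.xml                    # XFCE rebuilds it; log out and back in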

    Read the article

  • android game performance regarding timers

    - by iQue
    I'm new to the game-dev world, and I have a tendency to over-simplify my code; sometimes this costs me a lot of memory. I'm using a custom TimerTask that looks like this:

        public class Task extends TimerTask {
            private MainGamePanel panel;

            public Task(MainGamePanel panel) {
                this.panel = panel;
            }

            /**
             * When the timer executes, this code is run.
             */
            public void run() {
                panel.createEnemies();
            }
        }

    This task calls this method from my view:

        public void createEnemies() {
            Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.female);
            if (enemyCounter < 24) {
                enemies.add(new Enemy(bmp, this));
            }
            enemyCounter++;
        }

    Since I call this in the onCreate method instead of in my view's constructor (because my enemies need the width and height of the view), I'm wondering whether this will work when I have multiple levels in the game (starting a new intent), and whether this kind of timer really is the best way, performance-wise, to add a delay between the spawning times of my enemies.

    Adding the code for my timer in case anyone comes here because they don't understand timers:

        private Timer timer1 = new Timer();
        private long delay1 = 5 * 1000; // 5 sec delay

        public void surfaceCreated(SurfaceHolder holder) {
            timer1.schedule(new Task(this), 0, delay1); // schedule the timer with the delay
            thread.setRunning(true);
            thread.start();
        }

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So times are changing, toward cloud development/computing. I'm trying to work out the new "cloud" workflow based on the services I'm going to use: GitHub, Cloud9 IDE and Cloud Foundry. Here is what is on my mind:

    GitHub acts as the central (main) repo, just like yesterday's local filesystem; every service bases its work on this main repo.

    Workflow:
    - GitHub: I create a new GitHub repo that serves as the main repo for the project.
    - Cloud9 IDE: I open my GitHub repo and write my tests and implementation (BDD/TDD). When I'm ready, I save (commit) to the main repo on GitHub.
    - X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (Yeti and/or Selenium), and reports whether the tests passed. If not, I make another commit until all tests pass.
    - X: I run the Cloud Foundry commands to push the main GitHub repo to Cloud Foundry's servers, which deploy my app automatically.

    What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Could I install it on Cloud9 IDE (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the GitHub repo and run the build scripts. And since the concept of Cloud9 IDE is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know GitHub's workflow. We now know Cloud Foundry's workflow (deploy/scale with a RESTful API/command-line tool). But how Cloud9 IDE will operate is still somewhat unclear to me. Someone at Cloud9 IDE mentioned that there will be buttons like "deploy", so I can deploy with one click, but that, I guess, will depend on which services that deploy process hooks into, etc. Could someone shed some light on this cloud workflow topic and fill in the gaps? Thanks.
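    For the Cloud Foundry leg, the command-line deploy at the time of writing goes through the vmc tool; a minimal sketch of that step from a cloned repo (the app name and repo URL are examples):

        vmc target api.cloudfoundry.com                     # point the CLI at the CF API
        vmc login                                           # authenticate
        git clone git@github.com:me/myapp.git && cd myapp   # hypothetical repo
        vmc push myapp                                      # upload and start the app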

    Read the article

  • Acer Aspire One 725 - missing graphic card driver?

    - by Melon
    Recently I bought an Acer Aspire One 725 netbook and installed Ubuntu 12.10 on it. I bought it because it can play HD movies and has Full HD on the external VGA port. However, movies from YouTube have a really low frame rate, and if you open three tabs in Opera (for example Gmail, YouTube and Ask Ubuntu) it gets really laggy. My suspicion is that the driver for the graphics card is missing: when I check System > Details > Graphics, the driver is unknown. After running lspci | grep VGA I get this output:

        00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Device 980a

    From what I see, I have an AMD C-70 processor integrated with an AMD Radeon HD 6290 (or something similar). Has anyone had the same problem? Do you know which drivers need to be installed for the graphics to work properly? On the official Acer page there are only drivers for Win7 and Win8...

    Update: I have tried installing fglrx, but I get the following errors (either I don't have the libraries, or someone didn't make a clean build before release ;)

        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c: In function 'KCL_MEM_AllocLinearAddrInterval':
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:2124:5: error: implicit declaration of function 'do_mmap' [-Werror=implicit-function-declaration]
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:2124:13: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c: In function 'kasInitExecutionLevels':
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: error: 'cpu_possible_map' undeclared (first use in this function)
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: note: each undeclared identifier is reported only once for each function it appears in
        /lib/modules/fglrx/build_mod/2.6.x/firegl_public.c:4159:5: warning: left-hand operand of comma expression has no effect [-Wunused-value]

    Update 2: After fixing the compilation errors, Ubuntu acts bizarre and unstable (no left icon panel, no upper panel, I cannot run any programs, I only see the desktop).

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team, but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different repository and can follow a different release cycle. I'm trying to come up with a process that will prevent, or at least reduce, breaking changes in the services layer from the perspective of the UI layer.

    I've thought about writing integration tests at the UI-layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) to have the developer of the services layer create and maintain a client library for the REST service that lives in the UI layer, which they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically typed API that the UI code builds against: if the client library API changes, the UI code won't compile, so we'll know sooner that there was a breaking change. I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.

    Read the article

  • Why ADF Developers Should Attend ODTUG This Year

    - by shay.shmeltzer
    If you are using Oracle ADF, or planning to pick it up in the next year, I would encourage you to try to attend this year's ODTUG Kscope conference. If you are not familiar with it, ODTUG - the Oracle Development Tools User Group - holds a yearly conference that is very technical in nature. It is not a huge conference in terms of the number of attendees, but this just means that you have more opportunities to interact with Oracle ACEs, Oracle product managers and other developers. The conference is known to be a no-fluff, no-marketing, technical conference.

    This year, however, there is one key new thing that should be of interest to readers of this blog: a new track called "Fusion Middleware" has been formed, and it has lots of sessions for any level of ADF developer. The track is run by several Oracle ACEs who are also involved in the ADF Enterprise Methodology Group. They have sessions for every level of ADF awareness, from beginner to expert, and you can also learn about related technologies such as WebCenter and SOA Suite. Most of the sessions are run by users who share their real-world experience with the technology, and other PMs and I will also be running a few sessions and hands-on labs there. Check out the list of sessions in the Fusion Middleware track, and don't miss the Sunday symposium too.

    Read the article
