Search Results

Search found 23811 results on 953 pages for 'javax script'.


  • How do I stop icons appearing on the desktop in a particular area?

    - by Seamus
    When I download something to my desktop, or insert a CD or flash drive, the icon appears on my desktop. When I have conky running, the icon sometimes appears in the top right corner, underneath conky, where I can't see it. How do I stop this from happening? My .conkyrc is pasted below. I didn't write it all myself, so I'm not entirely sure what I need to change, or what parts are relevant for this particular question...

        # UBUNTU-CONKY
        # A comprehensive conky script, configured for use on
        # Ubuntu / Debian Gnome, without the need for any external scripts.
        #
        # Based on conky-jc and the default .conkyrc.
        # INCLUDES:
        # - tail of /var/log/messages
        # - netstat shows number of connections from your computer and application/PID making it. Kill spyware!
        #
        # -- Pengo
        #

        # Create own window instead of using desktop (required in nautilus)
        own_window yes
        own_window_type override
        own_window_transparent yes
        own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager

        # Use double buffering (reduces flicker, may not work for everyone)
        double_buffer yes

        # fiddle with window
        use_spacer right

        # Use Xft?
        use_xft yes
        xftfont DejaVu Sans:size=8
        xftalpha 0.8
        text_buffer_size 2048

        # Update interval in seconds
        update_interval 3.0

        # Minimum size of text area
        # minimum_size 250 5

        # Draw shades?
        draw_shades no

        # Text stuff
        draw_outline no # amplifies text if yes
        draw_borders no
        uppercase no # set to yes if you want all text to be in uppercase

        # Stippled borders?
        stippled_borders 3

        # border margins
        border_margin 9

        # border width
        border_width 10

        # Default colors and also border colors, grey90 == #e5e5e5
        default_color grey
        own_window_colour brown
        own_window_transparent yes

        # Text alignment, other possible values are commented
        #alignment top_left
        alignment top_right
        #alignment bottom_left
        #alignment bottom_right

        # Gap between borders of screen and text
        gap_x 10
        gap_y 20

        # stuff after 'TEXT' will be formatted on screen
        TEXT
        $color
        ${color orange}SYSTEM ${hr 2}$color
        $nodename $sysname $kernel on $machine

        ${color orange}CPU ${hr 2}$color
        ${freq}MHz Load: ${loadavg} Temp: ${acpitemp}
        $cpubar
        ${cpugraph 000000 ffffff}
        NAME ${goto 150}PID ${goto 200}CPU% ${goto 250}MEM%
        ${top name 1} ${goto 150}${top pid 1} ${goto 200}${top cpu 1} ${goto 250}${top mem 1}
        ${top name 2} ${goto 150}${top pid 2} ${goto 200}${top cpu 2} ${goto 250}${top mem 2}
        ${top name 3} ${goto 150}${top pid 3} ${goto 200}${top cpu 3} ${goto 250}${top mem 3}
        ${top name 4} ${goto 150}${top pid 4} ${goto 200}${top cpu 4} ${goto 250}${top mem 4}

        ${color orange}MEMORY / DISK ${hr 2}$color
        RAM: $memperc% ${membar 6}$color
        Swap: $swapperc% ${swapbar 6}$color
        Home: ${fs_free_perc /home}% ${fs_bar 6 /}$color
        Free Space: ${fs_free /home}

        ${color orange}NETWORK (${addr eth0}) ${hr 2}$color
        Down: $color${downspeed eth0} k/s ${alignr}Up: ${upspeed eth0} k/s
        ${downspeedgraph eth0 25,140 000000 ff0000} ${alignr}${upspeedgraph eth0 25,140 000000 00ff00}$color
        Total: ${totaldown eth0} ${alignr}Total: ${totalup eth0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

        ${color orange}WIRELESS (${addr wlan0}) ${hr 2}$color
        Down: $color${downspeed wlan0} k/s ${alignr}Up: ${upspeed wlan0} k/s
        ${downspeedgraph wlan0 25,140 000000 ff0000} ${alignr}${upspeedgraph wlan0 25,140 000000 00ff00}$color
        Total: ${totaldown wlan0} ${alignr}Total: ${totalup wlan0}
        ${execi 30 netstat -ept | grep ESTAB | awk '{print $9}' | cut -d: -f1 | sort | uniq -c | sort -nr}

    Conky solutions have been offered, but perhaps these aren't the best way of approaching it. What I really want is to stop icons even appearing in that part of the desktop window: that is, I want to make part of the desktop real estate "off-limits" to new icons appearing on the desktop.
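
    For completeness, the conky-side workaround people suggest amounts to moving conky out of the icon drop zone using the placement directives already present in the config above, e.g. (values purely illustrative):

        # move conky to the bottom-right so new icons in the top-right stay visible
        alignment bottom_right
        gap_x 10
        gap_y 30

    But that only relocates the problem, which is why I'm asking about making the area off-limits instead.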

    Read the article

  • GPU hung when switching graphic card

    - by Lie Ryan
    I have a laptop (Dell Inspiron N4110) with switchable graphics.

        $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (rev ff)

    Normally, my laptop starts with both graphics cards enabled, which makes it run very hot and the fan very noisy. I have been using a small script to disable the Radeon card, and for some time I was quite happy with this arrangement. However, I have been having some issues with the Intel card (IGD): it often hangs at random when running OpenGL apps, so I want to give the Radeon card (DIS) another chance. I have never been able to switch to the Radeon card, but recently I found out that if I do a "delayed switching" (DDIS):

        # echo "DDIS" > /sys/kernel/debug/vgaswitcheroo/switch
        root@lieryan-dell-ubuntu:/sys/kernel/debug/vgaswitcheroo# cat switch
        0:IGD:+:Pwr:0000:00:02.0
        1:DIS: :Pwr:0000:01:00.0

    and then log off (i.e. to restart X), the screen switches to a pseudo-tty and then gets stuck there, frozen. At that point the mouse and keyboard stop working, so I can't switch to another ptty. I tried ssh-ing from another computer to salvage logs (dmesg at that point) and whatnot; I found out that when it freezes, the active graphics card is the AMD card:

        -- this is from ssh --
        # cat switch
        0:IGD: :Off:0000:00:02.0
        1:DIS:+:Pwr:0000:01:00.0

    but the GPU is apparently hung; looking at dmesg gives:

        ...
        [ 1411.649974] vga_switcheroo: client 0 refused switch
        [ 1411.649985] vga_switcheroo: setting delayed switch to client 1
        [ 1423.911759] vga_switcheroo: processing delayed switch to 1
        [ 1424.006564] fbcon: Remapping primary device, fb1, to tty 1-63
        [ 1424.006799] i915: switched off
        [ 1424.840351] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1425.718088] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1426.622377] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1427.355683] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        [ 1428.193549] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id
        ... (the invalid framebuffer id error repeats many times over) ...

    I was able to successfully recover by switching back to the Intel card and restarting X from ssh, which indicates that only the Radeon card has problems switching. System info:

        $ uname -a
        Linux lieryan-dell-ubuntu 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.10
        Release:        11.10
        Codename:       oneiric

    The laptop also does not have an option to select the graphics card in the BIOS, and the proprietary driver, fglrx, has never worked either: when I installed it through jockey ("Additional Drivers"), glxinfo showed rendering still being done by Mesa, the /sys/kernel/debug/vgaswitcheroo directory went missing, and the driver crashed with a traceback if I used xorg.conf to tell X to use fglrx. Does anyone have any idea whether it is possible to use this AMD card with either the radeon or the fglrx driver? logs: dmesg
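
    For reference, the exact switch-and-recover sequence I'm using is essentially the following (a sketch; it assumes a root shell, debugfs mounted at /sys/kernel/debug, and lightdm as the display manager):

        cd /sys/kernel/debug/vgaswitcheroo
        echo DDIS > switch        # request a delayed switch to the discrete (Radeon) card
        cat switch                # check which card is currently active / powered
        # log off so X restarts; if the machine wedges as described above,
        # recover over ssh (exact order may vary):
        echo DIGD > switch        # delayed switch back to the integrated (Intel) card
        service lightdm restart   # restart X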

    Read the article

  • Forms & Reports upgrade characterset issues

    - by Lukasz Romaszewski
    Hello! This quick post is based on my findings during recent IMC workshops, especially those related to upgrading Forms 6i/9i/10g applications to the Forms 11g platform. The upgrade process itself is pretty straightforward, and it basically requires recompiling your Forms application with the latest version of the frmcmp tool. For some cases though, especially when you migrate from Forms 6i, which is a client-server architecture, to a 3-tier web solution (Forms 11g), you need to rewrite some parts of your code to make it run on the new platform. The things you need to change range from reimplementing (using the webutil library) typical client-side functionality like local IO operations, access to WinAPI, invoking DLLs etc., to changing deprecated or obsolete APIs like RUN_PRODUCT to RUN_REPORT_OBJECT. To automate those changes Oracle provides a complete Java API (JDAPI) which allows you to manipulate the code and structure of your modules. To make it even easier we can use the Forms Migration Assistant tool (written in Java using JDAPI), which is able to replace all occurrences of old API entries with their 11g equivalents or warn you when the replacement is not possible. You can also add your own replacement definitions in the search_replace.properties file.

    But you need to be aware of some issues that can be encountered using this tool. First of all, if you are using some hard-coded text inside your triggers, you may notice that after processing them with the Migration Assistant tool the national characters may be lost. This is due to the fact that you need to explicitly tell the Java application (which the MA really is) what kind of characterset it should use to read that text properly. In order to do that, just add the following line to the script calling the MA:

        export JAVA_TOOL_OPTIONS=-Dfile.encoding=<JAVA_ISO_ENCODING>

    where the particular encoding must match the NLS_LANG in your Forms Builder environment (for example, for a Polish characterset you need to use ISO-8859-2).

    The second issue you can encounter related to national charactersets is a lack of national symbols in your reports after migration. This can be solved by adding an appropriate NLS_LANG entry in your reports environment. Sometimes, instead of a particular characterset, you see "Greek characters" in your reports. This is just the default font used by the reports engine instead of the one defined in your report. To solve it, you must copy the font definitions from your old environment (e.g. the Forms 10g installation) to the appropriate directory in the new installation (usually the AFM folder). For more information about this and other issues please refer to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=1297012.1 at My Oracle Support. That's all for today, stay tuned for more posts on this topic! Lukasz
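
    P.S. To make the two encoding settings concrete, here is a minimal sketch of what the MA launch script might contain. ISO-8859-2 and the Polish NLS_LANG value are examples only; substitute the charset matching your own environment:

        # make the JVM read Forms source in the right characterset
        export JAVA_TOOL_OPTIONS=-Dfile.encoding=ISO-8859-2
        # keep the Reports runtime consistent with the Builder environment
        # (illustrative value for Polish; adjust territory/charset as needed)
        export NLS_LANG=POLISH_POLAND.EE8ISO8859P2
        # ...then invoke the Forms Migration Assistant as usual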

    Read the article

  • Choose Custom New Tab Pages in Chrome

    - by Asian Angel
    For most people the default New Tab Page in Chrome works perfectly well for their purposes. But if you would prefer to choose what opens in a new tab yourself, then you will definitely want to have a look at the "Define your own new tab!" extension for Google Chrome.

    Before: Unless you are using a Speed Dial (or similar) extension, each time you click on the "New Tab Button" you get the same old page. It would certainly be a lot more satisfying if you could choose custom webpage(s) to open as new tabs.

    After: Once you have the extension installed, the best thing to do is click on the "New Tab Button". That will open up the "Options Page", where you can enter one or two custom website URLs of your choosing. Once you have your custom URLs entered, click on "Save". As soon as you click on the "New Tab Button", your new custom webpage(s) will open. If you chose two webpages, the first choice will open focused on the "right side" instead of the "left". Clicking on the "Home Button" will also open the webpage(s) that you chose, and they will likewise open as your starting "Home Pages" each time you start your browser.

    Conclusion: If you have wanted to choose your own custom "New Tab Page", then this is the extension that you have been waiting for.

    Links: Download the Define your own new tab! extension (Google Chrome Extensions)

    Read the article

  • SCVMM – Round 2 – How to create a Private Cloud using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/28/scvmm--round-2--how-to-create-a-private.aspx

    Have you ever seen the movie "A Bridge Too Far"? To avoid waking up "a click too far", it is good to script some tasks. Yes, of course we can follow wizards, but some of us want to be warriors :). A small tip: take a look at the credentials and system GUID examples. I don't know about you, but for me they will be really useful in the future.

        # credentials
        $credential = Get-Credential
        New-SCRunAsAccount -Name "TESTDOMAIN\Administrator" -Credential $credential

        # storage
        $opsMgrServerCredential = Get-SCRunAsAccount -Name "TESTDOMAIN\Administrator"
        New-SCStorageClassification -Name "Bronze" -Description "" -RunAsynchronously
        New-SCStorageClassification -Name "Silver" -Description "" -RunAsynchronously
        New-SCStorageClassification -Name "Gold" -Description "" -RunAsynchronously

        # add a shared storage
        Find-SCComputer -ComputerName "dc.TESTDOMAIN.net"
        Add-SCStorageProvider -AddWindowsNativeWmiProvider -Name "dc.TESTDOMAIN.net" -RunAsAccount $opsMgrServerCredential -ComputerName "dc.TESTDOMAIN.net"
        $fileServer = Get-SCStorageFileServer "dc.TESTDOMAIN.net"
        $fileShares = @()
        $fileShares += Get-SCStorageFileShare -Name "VMMLibrary"
        Set-SCStorageFileServer -StorageFileServer $fileServer -AddStorageFileShareToManagement $fileShares -RunAsynchronously

        # fabric network
        $logicalNetwork = New-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false
        $allHostGroups = @()
        $allHostGroups += Get-SCVMHostGroup -Name "All Hosts"
        $allSubnetVlan = @()
        $allSubnetVlan += New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 0
        New-SCLogicalNetworkDefinition -Name "TESTDOMAIN-Service-Network_0" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $allSubnetVlan

        # IP pool
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $logicalNetworkDefinition = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "TESTDOMAIN-Service-Network_0"
        # Gateways
        $allGateways = @()
        $allGateways += New-SCDefaultGateway -IPAddress "10.0.0.1" -Automatic
        # DNS servers
        $allDnsServer = @("10.0.0.1")
        # DNS suffixes
        $allDnsSuffixes = @("TESTDOMAIN.net")
        # WINS servers
        $allWinsServers = @()
        New-SCStaticIPAddressPool -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinition $logicalNetworkDefinition -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.51" -IPAddressRangeEnd "10.0.0.75" -DefaultGateway $allGateways -DNSServer $allDnsServer -DNSSuffix "" -DNSSearchSuffix $allDnsSuffixes -RunAsynchronously

        # Hyper-V Virtual Networks
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $vmNetwork = New-SCVMNetwork -Name "TESTDOMAIN-VMN" -LogicalNetwork $logicalNetwork -IsolationType "WindowsNetworkVirtualization" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"
        Write-Output $vmNetwork
        $subnet = New-SCSubnetVLan -Subnet "10.0.0.0/24"
        New-SCVMSubnet -Name "Con-SN" -VMNetwork $vmNetwork -SubnetVLan $subnet

        # bind VLAN with the Network Adapter
        $vmHost = Get-SCVMHost -ComputerName "VMM01.TESTDOMAIN.net"
        $vmHostNetworkAdapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost #-Name "Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)"
        Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -Description "" -AvailableForPlacement $true -UsedForManagement $true
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -AddOrSetLogicalNetwork $logicalNetwork
        Set-SCVMHost -VMHost $vmHost -RunAsynchronously -NumaSpanningEnabled $true

        # Create a Private Cloud
        $Guid = [System.Guid]::NewGuid()
        Set-SCCloudCapacity -JobGroup $Guid -UseCustomQuotaCountMaximum $false -UseMemoryMBMaximum $false -UseCPUCountMaximum $false -UseStorageGBMaximum $false -UseVMCountMaximum $false -CustomQuotaCount 10 -MemoryMB 10240 -CPUCount 10 -StorageGB 386 -VMCount 10
        $resources = @()
        $resources += Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $resources += Get-SCLoadBalancer -Manufacturer "Microsoft"
        $readonlyLibraryShares = @()
        $readonlyLibraryShares += Get-SCLibraryShare | where { $_.LibraryServer.Name -eq "dc.TESTDOMAIN.net" -and $_.Name -eq "VMMLibrary" }
        $addCapabilityProfiles = @()
        $addCapabilityProfiles += Get-SCCapabilityProfile -Name "Hyper-V"
        $Guid2 = [System.Guid]::NewGuid()
        Set-SCCloud -JobGroup $Guid2 -RunAsynchronously -AddCloudResource $resources -AddReadOnlyLibraryShare $readonlyLibraryShares -AddCapabilityProfile $addCapabilityProfiles
        $hostGroups = @()
        $hostGroups += Get-SCVMHostGroup -Name "TESTDOMAIN"
        New-SCCloud -VMHostGroup $hostGroups -Name "TESTDOMAIN-Cloud" -Description "" -RunAsynchronously

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like, and for any string columns you pass a fixed length value. You then need to specify how many rows in total you require to be generated. We use this component to test the pipeline and downstream components. Previously we would have used a script component (as a source) to generate the rows, but found ourselves rewriting the code too often, so we created this component.

    Screenshots (SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services)

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source in the list.

    Downloads

    The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)

    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)

    Read the article

  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I’m a simple dev, not an all-powerful DBA with godly powers.

    This morning, one of my sites was down and DNN reported a problem with the database. A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said: I have daily backups, so it’s just a matter of restoring a good copy of the database and log files. Well, I found out that’s not exactly true. You see, for this database I have daily file backups, and these are not database backups created by SQL Server.

    So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, and restarted the service, only to find out that SQL doesn’t like it when you do that. It suspects something fishy and marks the database as suspect. A database marked as suspect can’t be accessed at all. So now what?

    I searched throughout the tubes of the InterWeb and found that you can restore from a corrupted log file by creating a new database with the same name as the defective one, then copying the restored database file (the one with data) over the newly created one. Sweet! But you still end up with SQL marking the database as suspect; at least the newly created log is OK. Well, not true: it’s not corrupted, but the lack of data makes it not OK for SQL, so you need to rebuild the log. How can you do that when SQL blocks any action on the database? First, you need to change the database status from suspect to emergency. Then you need to set the database for single access only. After that, you need to repair the log with DBCC and do the DBA dance. If you dance long enough, SQL should repair the log file. Now you need to set the access back to multi user. Here’s the T-SQL script:

        use master
        GO
        EXEC sp_resetstatus 'MyDatabase'
        ALTER DATABASE MyDatabase SET EMERGENCY
        Alter database MyDatabase set Single_User
        DBCC checkdb('MyDatabase')
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DBCC CheckDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS)
        ALTER DATABASE MyDatabase SET MULTI_USER

    So I guess that it would have been a lot easier to restore a SQL backup. I can’t really say, but the InterWeb seems to say so. Anyway, lessons learned:

    - Vive la différence: file backups are different than SQL backups.
    - Don’t touch me: SQL doesn’t like it when you restore a file over a corrupted one.
    - The more the merrier: you should do both SQL and file backups.
    - WTF?: the InterWeb provides you with dozens of ways to deal with the problem, but many are SQL 2000 or SQL 2005 only, many are confusing, and many are written in strange dialects only DBAs understand.

    Read the article

  • How can I dynamically load the correct sprite from a sprite sheet?

    - by Leonard Challis
    I am making a simple card game in Unity. The game is based on a standard 52-card pack, with identical backs and unique faces. In my particular game different cards are worth different values and have various special abilities. The game will have 52 cards on the table (in the draw position, in the face-down deck, or in someone's hand) at all times, so this number won't change. I thought that making a Card prefab and instantiating 52 of these manually would be a bad idea. Even doing it in code, I thought, would be a bit OTT, and that I should just instantiate visual cards when they are face-up to the player.

    I have a sprite sheet of the 52 cards and the back, which is imported as a Sprite in multiple mode, sliced into a grid containing all the cards needed to play the game. The problem I now face is that, through my GameController script, I want to generate a shuffled pack of cards, deal some to each player and then show those cards to a player. However, I am not sure of the best way, or even if it's possible, to do this dynamically with the sprite sheets as they are. For instance, if I have the following:

        private CardRank rank;
        private CardSuite suite;

        private void Start()
        {
            this.rank = CardRank.Ace;
            this.suite = CardSuite.Spades;
        }

    This class would be instantiated by the game manager. I would have 52 of these in code. Whenever I have to visually show a card in the scene, I would use a card prefab, which is essentially a game object with a SpriteRenderer on it. I would need to dynamically load the correct sprite for this object from the spritesheet. The sliced sprites from the sprite sheet actually have names in the format AS (Ace of Spades), 7H (Seven of Hearts), etc., though this was a manual thing I did myself, of course.

    I have also tried various alternative solutions, including creating animations, having separate sprites not in a spritesheet, and having an array of available sprites with a specific index for each card, but none seem as elegant as trying to load the correct sprite at runtime, as I'm trying to. So, how do I load a specific sprite from a spritesheet at runtime? I'm open to suggestions, even those that make me think differently about how to approach the problem.

    Read the article

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE, also used by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB, you can edit it. So you can drop your tcl scripts right into OWB and edit them in-place, and you don't need another IDE like Eclipse just for this task.

    Cool, so you have the file here. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing it will ask if you want to save it; check it out.

    Now we enter the realm of 'he who dares'... Note the IDE doesn't know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, etc. With OWB, the OMB scripts are tcl; we usually have a .tcl extension on these files. One of the things we can do to trick up the syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting. See the illustration here where, side by side, we see the file with a .java extension and a .tcl extension. Not ideal pretending to be .java, but it gets us a way to having something more useful than notepad.

    We can then change the syntax highlighting such that we get Eclipse-like highlighting within the IDE, from the Tools > Preferences option; you then get the Eclipse-like rendering, albeit using a little tweak on the file names... Might be useful if you are doing any kind of heavy duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.
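
    If renaming files by hand feels wrong, a throwaway shell loop (my own workaround sketch, not part of OWB) can maintain .java shadow copies of your OMB scripts purely for editing with highlighting:

        # shadow-copy tcl scripts so the IDE applies Java-ish highlighting
        for f in *.tcl; do
          cp -p "$f" "${f%.tcl}.java"   # edit the .java copy, keep the .tcl as source of truth
        done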

    Read the article

  • Is anyone doing "real" TDD with Visual-C++, and if yes, how do they do it?

    - by Martin
    Test Driven Development implies writing the test before the code and following a certain cycle:

    - Write Test
    - Check Test (run)
    - Write Production Code
    - Check Test (run)
    - Clean up Production Code
    - Check Test (run)

    As far as I'm concerned, this is only possible if your development solution allows you to very quickly switch between the production and test code, and to execute the test for a certain production code part extremely quickly.

    Now, while there exist lots of unit testing frameworks for C++ (I'm using Boost.Test atm.), it seems that there doesn't really exist any decent (for native C++) Visual Studio (plugin) solution that makes the TDD cycle bearable, regardless of the framework used. "Bearable" means that it's a one-click action to run a test for a certain cpp file without having to manually set up a separate testing project etc. "Bearable" also means that a simple test starts (linking!) and runs very quickly.

    So, what tools (plugins) and techniques are out there that make the TDD cycle possible for native C++ development with Visual Studio? Note: I'm fine with free or "commercial" tools. Please: no framework recommendations. (Unless the framework has a dedicated Visual Studio plugin and you want to recommend the plugin.)

    Edit Note: The answers so far have provided links on how to integrate a unit testing framework into Visual Studio. The resources more or less describe how to get the UT framework to compile and get your first tests running. This is not what this question is about. I'm of the opinion that to really work productively, having the unit tests in a manually maintained(!), separate vcproj from your production classes will add so much overhead that TDD "isn't possible". As far as I am aware, you do not add extra "projects" to a Java or C# thing to enable unit tests and TDD, and for a good reason. This should be possible with C++ given the right tools, but it seems (this question is about) that there are very few tools for TDD/C++/VS. Googling around, I've found one tool, VisualAssert, that seems to aim in the right direction. However, afaics, it doesn't seem to be in widespread use (compared to CppUnit, Boost.Test etc.).

    Edit: I would like to add a comment to the context for this question. I think it does a good summary of outlining (part of) the problem: (comment by Billy ONeal) "Visual Studio does not use 'build scripts' that are reasonably editable by the user. One project produces one binary. Moreover, Java has the property that Java never builds a complete binary -- the binary you build is just a ZIP of the class files. Therefore it's possible to compile separately then JAR together manually (using e.g. 7z). C++ and C# both actually link their binaries, so generally speaking you can't write a script like that. The closest you can get is to compile everything separately and then do two linkings (one for production, one for testing)."

    Read the article

  • RIM's current BB7 developer toolset is a joke

    - by mbrit
    tl;dr - RIM's current developer toolset is not fit for purpose.

    Background to this is that I'm currently working on a PhoneGap/Cordova project for a client that has to run on BlackBerry. The tooling is so ridiculous to use that even though I had a gentle dig at them in a Guardian piece, it's worth having a more full-on attack.

    At the moment, RIM's pitch is that apps are built for the current BBOS7 devices using WebWorks. This is an HTML-based toolset. Essentially a browser is spun up in a native app container and your app is powered by JavaScript. Specific JavaScript libraries exist that thunk down to native capabilities on the device. I happen to use PhoneGap/Cordova in combination with this.

    The tooling is non-existent. I'm using TextMate, Ant, and Terminal to develop the app. There's no "console.log" output, and no debugging. The only way to instrument the app is to put "alert" calls in your code. Apart from the fact that that's *not* fine in 2012, how about this: every time you deploy a new app to the device, the device has to reboot. This process takes six minutes on a relatively modern BlackBerry device.

    How about this as well: in order to get a file into the package it has to be signed. My small app over here has 100 different files (75 or so generated). Signing doesn't happen locally; it happens on RIM's servers in Waterloo. Thus whenever you deploy the app, this utility has to call RIM's servers 100 times. More to the point, sometimes during the day these servers have "micro-downtime" moments where they're unreachable for five or ten minutes, normally two or three times a day. Oh yes, you'll also get an email sent to you per signing, on success or failure. 100 inbound emails, per deployment. (I started this post at the beginning of one of these cycles, by the way. That's how long it takes to build and deploy *once*. By the way, the change I made didn't work.)

    To clarify:
    * Change the script,
    * Build it using Ant,
    * Ant will spin up a Java app that talks to RIM's servers to sign it.
    * Receive 100 emails, assuming the server is up.
    * App deployed - takes about 30 seconds.
    * BlackBerry device restarts - takes about six minutes.
    * Find and open the app. Go through security prompts.
    * Test the app, with no "console.log" output and no debugger.

    "Why not use the simulator?" I hear you ask. Well, apart from the fact that the simulator refused to reach any network service over HTTPS that I happen to own? (Some people suggest changing DNS settings for this known issue.) Admittedly, the simulator does show you console.log, but you still have the "six minute" restart issue on the simulator.

    Developers will understand this problem. Breaking concentration for six-plus minutes every time you want to deploy an app turns developing into a nightmare. Combine that with no worthy debugging tools, and the toolset becomes a joke.

    Read the article

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
    I tried the live boot CD and boot-repair, and also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen gives me something akin to:

        Assuming write through cache
        Asking for cache data failed

    It appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine. The last message during boot is "Starting TiMidity++ ALSA midi emulation... [OK]".

    I used boot-repair to generate a boot info report. One thing looks odd to me: it reports a missing core.img on /dev/sda1. Here is the full info:

        Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012]

        ============================= Boot Info Summary: ===============================

        => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           for (,msdos1)/boot/grub on this drive.
        => Windows is installed in the MBR of /dev/sdb.

        sda1: __________________________________________
          File system: ext4
          Boot sector type: Grub2 (v1.99)
          Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda1 and
            looks at sector 18406911 of the same hard drive for core.img, but
            core.img can not be found at this location.
          Operating System: Ubuntu 12.04.1 LTS
          Boot files: /boot/grub/grub.cfg /etc/fstab /boot/extlinux/extlinux.conf
            /boot/grub/core.img

        sda2: __________________________________________
          File system: Extended Partition
          Boot sector type: -
          Boot sector info:

        sda5: __________________________________________
          File system: swap
          Boot sector type: -
          Boot sector info:

        sdb1: __________________________________________
          File system: ntfs
          Boot sector type: Windows XP: NTFS
          Boot sector info: No errors found in the Boot Parameter Block.
          Operating System:
          Boot files:

        ============================ Drive/Partition Info: =============================

        Drive: sda _______________________________________
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sda1   *     63            307,339,514  307,339,452   83  Linux
        /dev/sda2         307,339,515   312,576,704  5,237,190     5   Extended
        /dev/sda5         307,339,578   312,576,704  5,237,127     82  Linux swap / Solaris

        Drive: sdb _______________________________________
        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector  End Sector   # of Sectors  Id  System
        /dev/sdb1         2,048         625,142,447  625,140,400   7   NTFS / exFAT / HPFS

        "blkid" output: ____________________________________
        Device      UUID                                  TYPE      LABEL
        /dev/loop0                                        squashfs
        /dev/sda1   11b4d633-7863-40b2-a6ca-da5f82c3ad0b  ext4
        /dev/sda5   cb8d65f4-8cf9-4088-b804-e3dea2151033  swap
        /dev/sdb1   349E7C109E7BC8BE                      ntfs      Personal1

        ================================ Mount points: =================================
        Device     Mount_Point       Type     Options
        /dev/sdb1  /media/Personal1  fuseblk  (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
        /dev/sr0   /live/image       iso9660  (ro,noatime)

    ...(a bunch of config file info; let me know if anyone wants to see it!)

    But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor. I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?
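
    In case it matters, the standard live-CD GRUB reinstall that gets suggested for a bad core.img pointer would be roughly the following sketch (assuming /dev/sda1 really is the root partition, as the report above suggests); I'd welcome confirmation before I run it:

        sudo mount /dev/sda1 /mnt
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt update-grub        # regenerate grub.cfg
        sudo chroot /mnt grub-install /dev/sda   # rewrite the MBR and core.img location
        # reboot and hold Shift to get the GRUB menu, which lists recovery mode entries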

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    - Setup the library with automatic versioning and a nuspec
      - Set the library assembly version to auto-increment build and revision
        - AssemblyInfo -> [assembly: AssemblyVersion("1.0.*")]
        - This auto-increments build and revision based on time of build
      - Major & Minor
        - Major should be changed when you have breaking changes
        - Minor should be changed once you have a solid new release
        - During development I don't increment these
      - Create a nuspec, version this with the code
        - nuspec - set version to <version>$version$</version>
        - This uses the assembly's version, which is auto-incrementing
    - Make changes to code
    - Run automated build (ruby/rake)
      - run "rake nuget"
      - the nuget task builds the nuget package and copies it to a local nuget feed
      - I use an environment variable to point at this so I can change it on a machine level!
      - The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    - Setup the local nuget feed as a nuget package source (this is only required once per machine)
    - Go to the consuming project
    - Update the package
      - Update-Package Library
      - or Install-Package Library

    TLDR
    - change library code
    - run "rake nuget"
    - run "Update-Package Library" in the consuming application
    - build/test!

    If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file:

    - Replace apikey with your nuget feed's apikey
    - Take out the confirm(s) if you don't want them

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed once you are done testing and ready to publish.

    TLDR
    - consider version number
    - run command to copy to public feed

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d, but they are old and don't apply anymore.

    In Blender (must be 2.49a for the python script to work!!!): I have a scene with 7 meshes, 1 armature, 10 bones. I tried going down to one mesh to simplify it, but that doesn't change anything. I found a small blend file that was used for cal3d, and it exported just fine. So I tried to copy its setup, with no success.

    EDIT 8/13/2012: Here is what I have found in the last week. I made the mesh in the newest Blender (2.62?) and exported it to import it in the old one (2.49a). I did the animation in the old one, because when importing new blend files into old Blenders it just said it would lose keyframe data, and all was good. And then you get the last problem of it not exporting meshes. BUT I found that meshes made in the old one export regardless; I can't find any that won't export. So if I used the old Blender to remake my model, I could get it to export :)

    At this point I found a modified release of cal3d (because the core model variable would not initiate; I made a really small test subject in the old Blender instead of remaking my big one, which took 4 hours) which fixes the morph objects and adds what cal3d left off with. Under their license they have to release the modification, but it has no documentation, so I have to figure it out on my own. It's mostly the same.

    But with this lib came a 3ds Max exporter. My question now is: how do I transfer armature and mesh information from Blender to 3ds Max in order to export into cal3d format? Every time I try, the models are see-through and small, and there are no bones. The formats I have tried to import are .3ds, .obj (mesh only) and COLLADA. In all of them the mesh is invisible and there are no bones. It says the default texture is on, so I should be able to see it. All the vertices are present; I found a vertex highlighter, so I can see those.

    If any of this is confusing, let me know so I can clear it up. It's late .<=sleep.

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind. I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client.

    For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page. For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX, and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it.

    As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration? Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code?

    To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong, and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previous unsolved error is in this bug. This is my grub configuration file:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        RUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap" video=uvesafb:mode_option=>>1024x768-24<<,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX=" vga=792 splash"
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        GRUB_GFXMODE=1024x768-24
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Thank you for answering.
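
    P.S. I suspect the culprit is the mangled GRUB_CMDLINE_LINUX_DEFAULT lines above (the "EOF in backquote substitution" error is consistent with the unbalanced quotes there). If so, the intended lines were presumably something like this; this is a guess at the intent, not a tested fix:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset video=uvesafb:mode_option=1024x768-24,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX="vga=792 splash"

    After correcting /etc/default/grub, regenerating the config and retrying the removal should presumably let the postrm script finish:

        sudo update-grub
        sudo apt-get install -f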

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience. Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery. In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh:

    - It is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be colinear, giving the child rotation but not translation relative to the parent point.
    - It has a normal that is aligned with a 'target'. Classically this target is the enemy that is being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags (and smoke, which is a particle system, but the same principle applies)) or 'upwards' (e.g. so bodies of riders bend properly when riding a horse up an incline etc.).
    - The anchor and target alignments have a maximum, a minimum, and a speed coefficient.
    - There is game logic for multiple turrets on a model and deciding which engages which enemy. 'primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there.

    So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the angle the vehicle is travelling in and its velocity, or something fancy.

    This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can be modelled by zero-vertex connecting meshes, perhaps? This is where I think the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?

    Read the article

  • OpenWorld in Small Bites

    - by Kathryn Perry
    Fifty thousand attendees -- that's bigger than the cities some of us live in. Monday morning it took 20 minutes to get from Hall D in Moscone North to a conference room in Moscone South; the crowds were crushing! A great start to a great week!

    Larry is as big a name as ever on the program schedule and on the Moscone stage. People were packed in Hall D and clustered around every big screen TV. He stayed on script as he laid out Oracle's SaaS, PaaS, and IaaS strategies.

    Every seat in Chris Leone's Fusion Apps Cloud Overview was filled on Monday morning. Oracle employees who wanted to get in were turned away. And the same thing happened in the repeat session on Wednesday. Our newest suite of apps is hot!

    Speaking of hot, the weather was made to order. Then it turned very San Francisco-like on Wednesday afternoon. Downright cold for those who trusted SF temps to hold in the 80's.

    Who did you follow on Twitter during the conference? So many voices, opinions, and convos! A great combo of social media and sharp minds. Be sure to follow @larryellison, @stevenrmiranda, and @Oracle for updates and MyPOVs.

    Keywords for the Apps customers at the conference were cloud, mobile, and social. Every day, every session, every speaker.

    Wednesday afternoon, 4 pm at the Four Seasons hotel. A large roomful of analysts and influencers firing questions at a panel of eight Fusion customers. Steve Miranda moderating. Good energy and a great exchange of information and confidence.

    Word on the street is that OpenWorld has outgrown San Francisco, but moving it seems unthinkable. The city isn't just a backdrop for an industry conference; it's a headliner right up there with Larry Ellison and Pearl Jam.

    As you can imagine, electrical outlets were in high demand at every venue. The most popular hotels and bars near Moscone designed their interiors around accessible electrical power strips. People are plenty willing to buy a drink while they grab a charge.

    Treasure Island in the dark. Eddie Vedder has an amazing voice! And Kings of Leon over-delivered on people's expectations. It was cold. It was windy. It was very fun. One analyst said it's the best customer appreciation party in the industry.

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.NET, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it.

    Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store.

    My question is: what do you think is the best way to achieve this, firstly in terms of design/architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing?

    My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities my module encapsulates would be disconnected from the rest of the application, because they belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.

    I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality.

    I've also thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures and ADO.NET, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.

    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
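    (For what it's worth, a minimal sketch of the 'module ensures its own schema exists' idea, assuming the later EF code-first DbContext API rather than EF4's raw ObjectContext; the types, namespace and schema name here are hypothetical illustrations, not an established pattern.)

        using System.Data.Entity;

        namespace Acme.Modules.Auditing
        {
            // An entity that belongs to the module, not to the host application.
            public class AuditEntry
            {
                public int Id { get; set; }
                public string Message { get; set; }
            }

            // The module's private context; its tables live in their own schema
            // so they do not collide with the host application's tables.
            public class AuditingContext : DbContext
            {
                public DbSet<AuditEntry> Entries { get; set; }

                protected override void OnModelCreating(DbModelBuilder modelBuilder)
                {
                    modelBuilder.Entity<AuditEntry>().ToTable("AuditEntry", "auditing");
                }
            }

            public static class AuditingModule
            {
                // Called once when the module is registered at start-up:
                // create the backing schema if it is not already there.
                public static void Initialize()
                {
                    using (var ctx = new AuditingContext())
                    {
                        ctx.Database.CreateIfNotExists();
                    }
                }
            }
        }

    (The trade-off the question identifies remains: entities inside AuditingContext stay disconnected from the host's own context.)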

    Read the article

  • Unable to configure/setup 5.1 audio with 12.04

    - by Vipin Vinayan
    I am kinda new to Ubuntu as well. I have been having this issue with audio for quite some time now. Initially, when I installed version 11.10 (I guess), I was able to use my 5.1 speakers without any issues. If my memory serves me right, it was after an update that the 5.1 audio stopped working and the video resolution would not get saved. I temporarily fixed the resolution issue by creating a start-up shell script that updates the resolution and loads it, but the issue with audio has been going on for quite some time now. Even though I have the option for 5.1, only two speakers seem to be working.

    I thought an upgrade should fix the issue and so upgraded the OS to version 12.04. I have also tried uninstalling alsa and pulseaudio, reinstalling them, and changing the channels in /etc/pulse/daemon.conf from 2 to 6. I have also tried installing pavucontrol, but nothing seems to have worked and the issue still persists. Is there anything else you could suggest?

    The lspci output on my computer is as follows:

        00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
        00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10)
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
        00:1d.0 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)

    Would really appreciate a response that will assist me in resolving my issue. Thanks in advance, Vipin
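    (One hedged avenue to try: make six channels the PulseAudio default and explicitly select the card's analog 5.1 profile. The card index and profile name below are placeholders; list your own card's profiles first.)

        # /etc/pulse/daemon.conf -- request six channels by default
        default-sample-channels = 6

        # Restart PulseAudio, inspect the card's available profiles, then
        # force the analog 5.1 profile (index and profile name will vary):
        pulseaudio -k
        pacmd list-cards
        pacmd set-card-profile 0 output:analog-surround-51

        # Test each of the six speakers individually:
        speaker-test -c 6 -t wav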

    Read the article

  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved in questions from beginners. There is a big difference between a question from a beginner and a question from an advanced user. I have noticed that if an advanced user gets an error, they usually need just a small hint to resolve the problem. However, when a beginner gets an error, he sometimes sits on it for a long time, as he/she has no idea how to solve the problem, and no idea what the product is capable of. I recently received a very novice-level question. When I received the problem I could quickly see how the user was stuck. When I replied to him with the solution, he wrote a long email explaining how he had not been able to solve the problem, and thanked me multiple times. This whole thing inspired me to write this quick blog post. I have modified the user's question to match the code with AdventureWorks, and simplified it so it contains the core content which I wanted to discuss.

    Problem Statement: Find all the details of SalesOrderHeaders for the latest ShipDate. He came up with the following T-SQL query:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = MAX(ShipDate)
        GO

    When he executed the above script it gave him the following error:

        Msg 147, Level 15, State 1, Line 3
        An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

    He was not able to resolve this problem, even though the solution was hinted at in the error message itself. Due to lack of experience he came up with another version of the above query based on the error message:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        HAVING ShipDate = MAX(ShipDate)
        GO

    When he ran the above query it produced another error:

        Msg 8121, Level 16, State 1, Line 3
        Column 'Sales.SalesOrderHeader.ShipDate' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.

    What he actually wanted was all the SalesOrderHeader rows for sales shipped on the last day. Based on the problem statement, the right solution is the following, which does not generate an error:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader])

    Well, that's it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution available to the problem stated? Please share in the comments.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
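    (One such alternative, as a sketch: on SQL Server 2005 and later, TOP (1) WITH TIES combined with ORDER BY keeps every row that ties for the latest ShipDate.)

        SELECT TOP (1) WITH TIES *
        FROM [Sales].[SalesOrderHeader]
        ORDER BY ShipDate DESC
        GO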

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months, SharePoint traffic (consulting, training and speaking) has picked up, and with that, also the requests for deployments. There are good, great, bad and really bad things around this, but that is a topic for another post. Part of the good and great, however, has been the fact that organizations want to do a proof-of-concept deployment (even when WSS or MOSS has already been deployed). We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform, and then proceed to model the solution that fits their needs. But it should not stop there. The next step should be a POC (as many have requested) to test things out.

    Now, on to the meat of this post: how do I deploy? While it is a good process to watch all of it take place, not many have the time to sit through it, even more so when it has already been covered in the planning sessions mentioned above. I will break it into a deployment for development purposes and a deployment of a farm: two tools (or scripts) for those two different types of deployment.

    First, let me address the development environment. Around the last week of October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers. The kit will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris' post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx

    The other scenario is the use of a script to assist you through the deployment of a farm. Now, this is not to override planning; it should highlight the need for planning even more. How? By planning your service accounts, the structure of your sites and the scale of your deployment. Enter AutoSPInstaller. This is a CodePlex project, and the intent behind it is not only to automate the installation but to give some meaning, and get some sense, out of what goes on during a SharePoint deployment.

    How? Take, for example, the creation of the databases: when we do the initial out-of-the-box deployment using the wizard, more often than not we leave the names as they are. How is that a "bad thing"? Let's make it a better practice to rename those databases, and have them take on names that are not "GUID-ized". Having a better naming convention will not hurt; on the contrary, it allows for consistency.

    Here is the link to AutoSPInstaller's site on CodePlex: http://autospinstaller.codeplex.com/
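    (To make the naming point concrete: AutoSPInstaller drives the same PowerShell a scripted farm build would use. A hedged fragment of what explicit, non-"GUID-ized" database naming looks like when creating a farm; the server name, database names and passphrase below are placeholders.)

        New-SPConfigurationDatabase -DatabaseName "SP2010_Config" `
            -DatabaseServer "SQL01" `
            -AdministrationContentDatabaseName "SP2010_AdminContent" `
            -Passphrase (ConvertTo-SecureString "PlaceholderPass1!" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential)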

    Read the article

  • Update Errors in Xubuntu 12.10

    - by wil
    I updated my computer from 12.04 to 12.10, and after I finished updating, when I turned the computer on I was unable to update it any further. I tried installing a fresh copy of 13.04, but my CPU doesn't support PAE. I have an IBM ThinkPad T42 with a 1.7 GHz CPU. When updating through the terminal, this is the output.

    sudo apt-get upgrade:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         linux-image-extra-3.5.0-34-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
         linux-image-generic : Depends: linux-image-3.5.0-34-generic but it is not installed
        E: Unmet dependencies. Try using -f.

    sudo apt-get upgrade -f:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following NEW packages will be installed:
          linux-image-3.5.0-34-generic
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/11.8 MB of archives.
        After this operation, 25.9 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 191530 files and directories currently installed.)
        Unpacking linux-image-3.5.0-34-generic (from .../linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb) ...
        This kernel does not support a non-PAE CPU.
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-34-generic /boot/vmlinuz-3.5.0-34-generic
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.5.0-34-generic_3.5.0-34.55_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        wil@wil-ThinkPad-T42:~/Desktop$
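    (A hedged sketch of one way out: the 12.10 kernel here requires PAE, so stop apt from trying to install it by removing the meta-package that pulls it in, and keep running the older non-PAE kernel. The package names below are taken from the error output above; double-check them before removing anything.)

        # Remove the meta-package and the half-installed extras that depend
        # on the PAE-only 3.5 kernel, then let apt repair what remains:
        sudo apt-get remove linux-image-generic linux-image-extra-3.5.0-34-generic
        sudo apt-get -f install
        sudo apt-get upgrade

        # The machine keeps booting the existing non-PAE kernel from 12.04.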

    Read the article

< Previous Page | 812 813 814 815 816 817 818 819 820 821 822 823  | Next Page >