Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • Game Over function is not working in Starling

    - by aNgeLyN omar
    I've been following a tutorial over the web, but it somehow did not show anything about creating a game over function. I am new to the Starling framework and ActionScript, so I'm still trying to find a way to make it work. Here's the complete snippet of the code.

    package screens {
      import flash.geom.Rectangle;
      import flash.utils.getTimer;
      import events.NavigationEvent;
      import objects.GameBackground;
      import objects.Hero;
      import objects.Item;
      import objects.Obstacle;
      import starling.display.Button;
      import starling.display.Image;
      import starling.display.Sprite;
      import starling.events.Event;
      import starling.events.Touch;
      import starling.events.TouchEvent;
      import starling.text.TextField;
      import starling.utils.deg2rad;

      public class InGame extends Sprite {
        private var screenInGame:InGame;
        private var screenWelcome:Welcome;
        private var startButton:Button;
        private var playAgain:Button;
        private var bg:GameBackground;
        private var hero:Hero;
        private var timePrevious:Number;
        private var timeCurrent:Number;
        private var elapsed:Number;
        private var gameState:String;
        private var playerSpeed:Number = 0;
        private var hitObstacle:Number = 0;
        private const MIN_SPEED:Number = 650;
        private var scoreDistance:int;
        private var obstacleGapCount:int;
        private var gameArea:Rectangle;
        private var touch:Touch;
        private var touchX:Number;
        private var touchY:Number;
        private var obstaclesToAnimate:Vector.<Obstacle>;
        private var itemsToAnimate:Vector.<Item>;
        private var scoreText:TextField;
        private var remainingLives:TextField;
        private var gameOverText:TextField;
        private var iconSmall:Image;
        static private var lives:Number = 2;

        public function InGame() {
          super();
          this.addEventListener(starling.events.Event.ADDED_TO_STAGE, onAddedToStage);
        }

        private function onAddedToStage(event:Event):void {
          this.removeEventListener(Event.ADDED_TO_STAGE, onAddedToStage);
          drawGame();
          scoreText = new TextField(300, 100, "Score: 0", "MyFontName", 35, 0xD9D919, true);
          remainingLives = new TextField(600, 100, "Lives: " + lives + " X ", "MyFontName", 35, 0xD9D919, true);
          iconSmall = new Image(Assets.getAtlas().getTexture("darnahead1"));
          iconSmall.x = 360;
          iconSmall.y = 40;
          this.addChild(iconSmall);
          this.addChild(scoreText);
          this.addChild(remainingLives);
        }

        private function drawGame():void {
          bg = new GameBackground();
          this.addChild(bg);
          hero = new Hero();
          hero.x = stage.stageHeight / 2;
          hero.y = stage.stageWidth / 2;
          this.addChild(hero);
          startButton = new Button(Assets.getAtlas().getTexture("startButton"));
          startButton.x = stage.stageWidth * 0.5 - startButton.width * 0.5;
          startButton.y = stage.stageHeight * 0.5 - startButton.height * 0.5;
          this.addChild(startButton);
          gameArea = new Rectangle(0, 100, stage.stageWidth, stage.stageHeight - 250);
        }

        public function disposeTemporarily():void {
          this.visible = false;
        }

        public function initialize():void {
          this.visible = true;
          this.addEventListener(Event.ENTER_FRAME, checkElapsed);
          hero.x = -stage.stageWidth;
          hero.y = stage.stageHeight * 0.5;
          gameState = "idle";
          playerSpeed = 0;
          hitObstacle = 0;
          bg.speed = 0;
          scoreDistance = 0;
          obstacleGapCount = 0;
          obstaclesToAnimate = new Vector.<Obstacle>();
          itemsToAnimate = new Vector.<Item>();
          startButton.addEventListener(Event.TRIGGERED, onStartButtonClick);
          //var mainStage:InGame = InGame.current.nativeStage;
          //mainStage.dispatchEvent(new Event(Event.COMPLETE));
          //playAgain.addEventListener(Event.TRIGGERED, onRetry);
        }

        private function onStartButtonClick(event:Event):void {
          startButton.visible = false;
          startButton.removeEventListener(Event.TRIGGERED, onStartButtonClick);
          launchHero();
        }

        private function launchHero():void {
          this.addEventListener(TouchEvent.TOUCH, onTouch);
          this.addEventListener(Event.ENTER_FRAME, onGameTick);
        }

        private function onTouch(event:TouchEvent):void {
          touch = event.getTouch(stage);
          touchX = touch.globalX;
          touchY = touch.globalY;
        }

        private function onGameTick(event:Event):void {
          switch(gameState) {
            case "idle":
              if(hero.x < stage.stageWidth * 0.5 * 0.5) {
                hero.x += ((stage.stageWidth * 0.5 * 0.5 + 10) - hero.x) * 0.05;
                hero.y = stage.stageHeight * 0.5;
                playerSpeed += (MIN_SPEED - playerSpeed) * 0.05;
                bg.speed = playerSpeed * elapsed;
              } else {
                gameState = "flying";
              }
              break;
            case "flying":
              if(hitObstacle <= 0) {
                hero.y -= (hero.y - touchY) * 0.1;
                if(-(hero.y - touchY) < 150 && -(hero.y - touchY) > -150) {
                  hero.rotation = deg2rad(-(hero.y - touchY) * 0.2);
                }
                if(hero.y > gameArea.bottom - hero.height * 0.5) {
                  hero.y = gameArea.bottom - hero.height * 0.5;
                  hero.rotation = deg2rad(0);
                }
                if(hero.y < gameArea.top + hero.height * 0.5) {
                  hero.y = gameArea.top + hero.height * 0.5;
                  hero.rotation = deg2rad(0);
                }
              } else {
                hitObstacle--;
                cameraShake();
              }
              playerSpeed -= (playerSpeed - MIN_SPEED) * 0.01;
              bg.speed = playerSpeed * elapsed;
              scoreDistance += (playerSpeed * elapsed) * 0.1;
              scoreText.text = "Score: " + scoreDistance;
              initObstacle();
              animateObstacles();
              createEggItems();
              animateItems();
              remainingLives.text = "Lives: " + lives + " X ";
              if(lives == 0) {
                gameState = "over";
              }
              break;
            case "over":
              gameOver();
              break;
          }
        }

        private function gameOver():void {
          gameOverText = new TextField(800, 400, "Hero WAS KILLED!!!", "MyFontName", 50, 0xD9D919, true);
          scoreText = new TextField(800, 600, "Score: " + scoreDistance, "MyFontName", 30, 0xFFFFFF, true);
          this.addChild(scoreText);
          this.addChild(gameOverText);
          playAgain = new Button(Assets.getAtlas().getTexture("button_tryAgain"));
          playAgain.x = stage.stageWidth * 0.5 - startButton.width * 0.5;
          playAgain.y = stage.stageHeight * 0.75 - startButton.height * 0.75;
          this.addChild(playAgain);
          playAgain.addEventListener(Event.TRIGGERED, onRetry);
        }

        private function onRetry(event:Event):void {
          playAgain.visible = false;
          gameOverText.visible = false;
          scoreText.visible = false;
          var btnClicked:Button = event.target as Button;
          if((btnClicked as Button) == playAgain) {
            this.dispatchEvent(new NavigationEvent(NavigationEvent.CHANGE_SCREEN, {id: "playnow"}, true));
          }
          disposeTemporarily();
        }

        private function animateItems():void {
          var itemToTrack:Item;
          for(var i:uint = 0; i < itemsToAnimate.length; i++) {
            itemToTrack = itemsToAnimate[i];
            itemToTrack.x -= playerSpeed * elapsed;
            if(itemToTrack.bounds.intersects(hero.bounds)) {
              itemsToAnimate.splice(i, 1);
              this.removeChild(itemToTrack);
            }
            if(itemToTrack.x < -50) {
              itemsToAnimate.splice(i, 1);
              this.removeChild(itemToTrack);
            }
          }
        }

        private function createEggItems():void {
          if(Math.random() > 0.95) {
            var itemToTrack:Item = new Item(Math.ceil(Math.random() * 10));
            itemToTrack.x = stage.stageWidth + 50;
            itemToTrack.y = int(Math.random() * (gameArea.bottom - gameArea.top)) + gameArea.top;
            this.addChild(itemToTrack);
            itemsToAnimate.push(itemToTrack);
          }
        }

        private function cameraShake():void {
          if(hitObstacle > 0) {
            this.x = Math.random() * hitObstacle;
            this.y = Math.random() * hitObstacle;
          } else if(x != 0) {
            this.x = 0;
            this.y = 0;
            lives--;
          }
        }

        private function initObstacle():void {
          if(obstacleGapCount < 1200) {
            obstacleGapCount += playerSpeed * elapsed;
          } else if(obstacleGapCount != 0) {
            obstacleGapCount = 0;
            createObstacle(Math.ceil(Math.random() * 5), Math.random() * 1000 + 1000);
          }
        }

        private function animateObstacles():void {
          var obstacleToTrack:Obstacle;
          for(var i:uint = 0; i < obstaclesToAnimate.length; i++) {
            obstacleToTrack = obstaclesToAnimate[i];
            if(obstacleToTrack.alreadyHit == false && obstacleToTrack.bounds.intersects(hero.bounds)) {
              obstacleToTrack.alreadyHit = true;
              obstacleToTrack.rotation = deg2rad(70);
              hitObstacle = 30;
              playerSpeed *= 0.5;
            }
            if(obstacleToTrack.distance > 0) {
              obstacleToTrack.distance -= playerSpeed * elapsed;
            } else {
              if(obstacleToTrack.watchOut) {
                obstacleToTrack.watchOut = false;
              }
              obstacleToTrack.x -= (playerSpeed + obstacleToTrack.speed) * elapsed;
            }
            if(obstacleToTrack.x < -obstacleToTrack.width || gameState == "over") {
              obstaclesToAnimate.splice(i, 1);
              this.removeChild(obstacleToTrack);
            }
          }
        }

        private function checkElapsed(event:Event):void {
          timePrevious = timeCurrent;
          timeCurrent = getTimer();
          elapsed = (timeCurrent - timePrevious) * 0.001;
        }

        private function createObstacle(type:Number, distance:Number):void {
          var obstacle:Obstacle = new Obstacle(type, distance, true, 300);
          obstacle.x = stage.stageWidth;
          this.addChild(obstacle);
          if(type >= 4) {
            if(Math.random() > 0.5) {
              obstacle.y = gameArea.top;
              obstacle.position = "top";
            } else {
              obstacle.y = gameArea.bottom - obstacle.height;
              obstacle.position = "bottom";
            }
          } else {
            obstacle.y = int(Math.random() * (gameArea.bottom - obstacle.height - gameArea.top)) + gameArea.top;
            obstacle.position = "middle";
          }
          obstaclesToAnimate.push(obstacle);
        }
      }
    }


  • Launch of Oracle Enterprise Manager 11g - (27/May/10)

    - by Claudia Costa
    Don't miss this exclusive event for executives, IT managers and Oracle Partners, and explore how the latest release of Oracle Enterprise Manager lets IT management be driven by the business. Register today! Discover the new capabilities of Oracle Enterprise Manager 11g, which include:
    - Integrated management, from the application down to cloud computing, aimed at maximizing the return on IT investment
    - Business-driven application management, which lets the IT department identify and correct problems before they have an impact on the business
    - Integrated systems management and support, providing proactive notifications and fixes, combined with knowledge sharing among peers, to increase customer satisfaction
    Join us and find out how only Oracle Enterprise Manager 11g can help IT proactively improve business value across a range of technologies, including Sun systems; the Oracle Solaris operating system; Oracle Database; Oracle Fusion Middleware; Oracle E-Business Suite; Oracle's Siebel, PeopleSoft and JD Edwards solutions; virtualization technologies; and private cloud environments. There will be an exclusive session for Oracle partners covering topics such as specialization and joint business opportunities in the areas of application and systems management.
    Agenda - Sana Lisboa Park Hotel, Avenida Fontes Pereira de Melo, 8, Lisbon
    Thursday, 27 May 2010, 9:00-15:30
    9:00  Registration and coffee
    9:30  Introduction
    9:40  Keynote: Business-driven IT Management with Oracle Enterprise Manager 11g
    10:25 Customer experiences
    11:00 Break
    11:15 Integrated Application-to-disk Management
    11:45 Business-driven Application Management
    12:15 Integrated Cloud Management
    12:45 Integrated Systems Management and Support Experience
    13:15 Lunch
    14:30 Partner session - Specialization and business opportunities with Oracle Enterprise Manager
    Register today to reserve your place at this exclusive event.


  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while FTP gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use FTP, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details:
    - Both computers are running Ubuntu 10.10 (I am using Samba because I have a Mac as well).
    - The Samba share is on a local home network, mounted as:
        $ mount
        ...
        //server.local/share/ on /mnt/share type cifs (rw,mand)
    - Samba performance was tested by copying (cp) a single file of ~4 GB to and from the share, using time for timing and calculating the transfer speed by hand.
    - The FTP figures are the numbers reported by the FTP client for get/put of the same file.
    - iperf gives a network speed of ~900 Mbit/s.
    - bonnie++ gives disk speeds of 200 MB/s on both sides, for block reads as well as block writes.
    - I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)
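    For reference, the tuning parameters mentioned above live in the [global] section of smb.conf; a typical starting point looks like this (the values are illustrative things to experiment with, not a recommendation):

      [global]
          read raw = yes
          write raw = yes
          max xmit = 65535
          socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536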


  • Encrypting a non-Linux partition with LUKS

    - by linuxn00b
    I have a non-Linux partition I want to encrypt with LUKS. The goal is to be able to store it by itself on a device without Linux and access it from the device when needed with an Ubuntu live CD. I know LUKS can't encrypt partitions in place, so I created another, unformatted partition of the EXACT same size (using GParted's "Round to MiB" option) and ran this command:

      sudo cryptsetup luksFormat /dev/xxx

    where xxx is the partition's device name. Then I typed in my new passphrase and confirmed it. Oddly, the command exited immediately after, so I guess it doesn't encrypt the entire partition right away? Anyway, I then ran this command:

      sudo cryptsetup luksOpen /dev/xxx xxx

    Then I tried copying the contents of the existing partition (call it yyy) to the encrypted one like this:

      sudo dd if=/dev/yyy of=/dev/mapper/xxx bs=1MB

    It ran for a while, but exited with this just before writing the last MB:

      dd: writing `/dev/mapper/xxx': No space left on device

    I take this to mean the contents of yyy were truncated when copied to xxx, because I have dd'd it before, and whenever I have dd'd to a partition of the exact same size I never got that error (and fdisk reports they are the same size in blocks). After a little Googling I discovered that all luksFormat'ted partitions have a custom header followed by the encrypted contents. So it appears I need to create a partition exactly the size of the old one plus however many bytes a LUKS header is. First, what size should the destination partition be? And second, am I even on the right track here?

    UPDATE: I found this in the LUKS FAQ: "I think this is overly complicated. Is there an alternative? Yes, you can use plain dm-crypt. It does not allow multiple passphrases, but on the plus side, it has zero on disk description and if you overwrite some part of a plain dm-crypt partition, exactly the overwritten parts are lost (rounded up to sector borders)." So perhaps I shouldn't be using LUKS at all?
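    For the header-size question, one approach (a sketch; /dev/xxx is the placeholder device from above) is to read the payload offset from the LUKS header itself, which is reported in 512-byte sectors:

      sudo cryptsetup luksDump /dev/xxx | grep "Payload offset"
      # e.g. "Payload offset: 4096" means 4096 * 512 bytes = 2 MiB of header,
      # so the destination partition would need to be at least that much
      # larger than the source.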


  • Stream.CopyTo() extension method

    - by DigiMortal
    In one of my applications I needed to copy data from one stream to another. After playing with streams a little bit, I wrote a CopyTo() extension method for the Stream class that you can use to copy the contents of the current stream to a target stream. Here is my extension method. It is my working draft and it is possible that some more checks are needed before we can say this extension method is ready to be part of some API or class library.

      public static void CopyTo(this Stream fromStream, Stream toStream)
      {
          if (fromStream == null)
              throw new ArgumentNullException("fromStream");
          if (toStream == null)
              throw new ArgumentNullException("toStream");

          var bytes = new byte[8092];
          int dataRead;
          while ((dataRead = fromStream.Read(bytes, 0, bytes.Length)) > 0)
              toStream.Write(bytes, 0, dataRead);
      }

    And here is an example of how to use this extension method.

      using (var stream = response.GetResponseStream())
      using (var ms = new MemoryStream())
      {
          stream.CopyTo(ms);

          // Do something with copied data
      }

    I am using this code to copy data from an HTTP response stream to a memory stream, because I have to use a serializer that needs more than the response stream is able to offer.


  • High number of ethernet errors. Tool for testing the ethernet card?

    - by Fabio Dalla Libera
    I have an Asus Sabertooth X79. I often get corrupted files. I checked the RAM, but memtest finds no errors. To rule out disk errors, I tried copying the files to tmpfs. If I copy from the network, I get md5sum mismatches about once every 10 times with a 6 GB file. Copying from RAM to RAM, I get no mismatches. I get a very high number of errors in ifconfig (compared to other PCs I took as reference, which show 0 errors with much more traffic). Here is an example:

      RX packets:13972848 errors:200 dropped:0 overruns:0 frame:101

    The motherboard is new, but do you think there are problems with it? What could I use to test the (integrated) network adapter? What else do you think I should double-check?

    --edit--
    I tried another NIC; it gives a lot of "Corrupted MAC on Input. Disconnecting: Packet corrupt" and lost connections. I noticed that another PC downloads at 11.1 MB/s without problems, while this PC downloads at 66.0 MB/s. Is there any way to try to limit the speed?
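    A few ethtool commands may help exercise the card (a sketch; eth0 is a placeholder for the actual interface name):

      # Per-driver statistics, including CRC/frame error counters
      sudo ethtool -S eth0
      # Built-in adapter self-test, where the driver supports it
      sudo ethtool -t eth0 offline
      # Force a lower link speed to see whether the errors disappear
      sudo ethtool -s eth0 speed 100 duplex full autoneg off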


  • Wine / PlayOnLinux dependency issues when trying to install

    - by Glutanimate
    I am curious as to why installing PlayOnLinux entails removing seemingly unrelated packages like google-earth-stable. Is this the expected behaviour? This is the output I get when trying to install playonlinux through apt-get:

      The following packages were automatically installed and are no longer required:
        python-scour pax ncurses-term
      Use 'apt-get autoremove' to remove them.
      The following extra packages will be installed:
        binfmt-support fonts-horai-umefont fonts-unfonts-core libcapi20-3 libgif4:i386
        libmpg123-0 libodbc1 libpam-winbind ttf-umefont ttf-unfonts-core unixodbc winbind
        wine wine-gecko1.4 wine-gecko1.4:i386 wine1.4 wine1.4-amd64 wine1.4-common
        wine1.4-i386:i386 winetricks
      Suggested packages:
        libmyodbc odbc-postgresql tdsodbc unixodbc-bin dosbox
      Recommended packages:
        gettext:i386 unixodbc:i386
      The following packages will be REMOVED:
        alien cdbs debhelper dh-make dh-translations gettext google-earth-stable intltool
        intltool-debian lsb-core po-debconf
      The following NEW packages will be installed:
        binfmt-support fonts-horai-umefont fonts-unfonts-core libcapi20-3 libgif4:i386
        libmpg123-0 libodbc1 libpam-winbind playonlinux ttf-umefont ttf-unfonts-core
        unixodbc winbind wine wine-gecko1.4 wine-gecko1.4:i386 wine1.4 wine1.4-amd64
        wine1.4-common wine1.4-i386:i386 winetricks
      0 upgraded, 21 newly installed, 11 to remove and 0 not upgraded.
      Need to get 145 MB of archives.
      After this operation, 275 MB of additional disk space will be used.

    This is the first time I am trying to install Wine / POL. I am using the default repositories, no Wine PPA or POL source added. These are all the PPAs I am using: How do I install POL / Wine without having to remove all these packages?


  • links for 2010-03-18

    - by Bob Rhubart
    Oracle Database HA Architecture « The Oracle Instructor Oracle Certified Master Uwe Hesse introduces his blog's new Oracle Database HA Architecture page. (tags: oracle otn highavailability database) Mario Morgado: Where is the value of Enterprise Architecture? "When we purchase a product, its value is equivalent to the maximum amount that someone is willing to pay for the product. However, is the same equation valid in terms of the business value of enterprise architecture?" Mario Morgado (tags: otn oracle enterprisearchitecture) Steve Wilson: Managing Application to Disk "Of course, what we're introducing today goes beyond a mere re-skinning of Sun Ops Center. The promise is to offer real integration, and now we're delivering on the first phase in that roadmap by introducing the Oracle Management Connector for Ops Center. This software allows customers to connect an instance of Ops Center to an instance of Oracle Enterprise Manager's grid control server and connect the event streams of the two products, allowing for new levels of visibility into the customer's systems when using the combination of Oracle and Sun technology." "Virtual" Steve Wilson (tags: oraclesun opscenter)


  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns, is irrelevant: this is simply the physical stats of all indexes; disabled indexes are ignored, however. In its simplest form this dmv can be executed as:

      SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

      DECLARE @Start DATETIME
      DECLARE @First DATETIME
      DECLARE @Second DATETIME
      DECLARE @Third DATETIME
      DECLARE @Finish DATETIME
      SET @Start = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
      SET @First = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
      SET @Second = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
      SET @Third = GETDATE()
      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
      SET @Finish = GETDATE()
      SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
           , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
           , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
           , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

      SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
           , OBJECT_NAME([ddips].[object_id]) AS [TableName]
           ...
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us

      SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
           , OBJECT_NAME([ddips].[object_id]) AS [TableName]
           , [i].[name] AS [IndexName]
           , ...
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
      INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
          AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

      SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

      DECLARE @db_id SMALLINT;
      DECLARE @object_id INT;
      SET @db_id = DB_ID(N'AdventureWorks_2008');
      SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
      IF @db_id IS NULL
      BEGIN
          PRINT N'Invalid database';
      END
      ELSE IF @object_id IS NULL
      BEGIN
          PRINT N'Invalid object';
      END
      ELSE
      BEGIN
          SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
      END;
      GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
    - avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    - fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    - page_count. Total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA who is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked: column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index, and column 4 is the average size of each record. This approximates to: ((Col1 * 8) * 1024 * (Col2 / 100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    These are purely offered as a guide for the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and whether the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always be added to the end. * - it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford, a free ebook or paperback from Simple Talk. Disclaimer: Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
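    As a footnote to the 'intelligent' defragmentation process mentioned above, here is a minimal sketch of the decision logic, using the commonly cited 5%/30% thresholds from the Books Online example; the thresholds and the page_count filter are illustrative, not a production-ready script:

      SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
           , [i].[name] AS [IndexName]
           , [ddips].[avg_fragmentation_in_percent]
           , CASE
               WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'ALTER INDEX ... REBUILD'
               WHEN [ddips].[avg_fragmentation_in_percent] > 5 THEN 'ALTER INDEX ... REORGANIZE'
               ELSE 'No action'
             END AS [SuggestedAction]
      FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
      INNER JOIN [sys].[indexes] AS i ON [ddips].[object_id] = [i].[object_id]
          AND [ddips].[index_id] = [i].[index_id]
      WHERE [ddips].[page_count] > 8; -- ignore trivially small indexes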


  • How do I choose a package format for Linux software distribution?

    - by Ian C.
    We have a Java-based application that, to date, we've been distributing as a tarball with instructions for deploying. It's mostly self-contained, so deployment is fairly straightforward:
    - Untar on the disk you'd like it to live on;
    - Make sure Java is in your path and is a suitable distro and version;
    - Verify ownership and group on all the files;
    - Start up the server processes with our start script.
    If the user wants to get into start-on-boot stuff with SysV, we have some written instructions and a template init file for it in our tarball. We'd like to make this installation process a little more seamless: take care of the permissions and the init script deployment. We're also going to start bundling our own JRE with the application so that we're mostly free of external dependencies. The question we're faced with now is: how do we pick a package format for distribution? Is RPM the standard? Can all package management tools deal with it now? Our clients primarily run RHEL and CentOS, but we do have some using SuSE and even Debian. If we can pick a distro-agnostic format we'd prefer that. What about a self-extracting shell script, something akin to how Java is distributed (see the sketch below)? If we're dependency-free, would the self-extracting script be sufficient? What features or conveniences would we lose out on going with the script versus a proper package format meant for use by a package manager?
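    For what it's worth, the self-extracting approach mentioned above usually boils down to a script with a tarball appended; a minimal sketch (the paths, payload marker and post-install hook are all hypothetical) looks like this:

      #!/bin/sh
      # Self-extracting installer sketch: everything after __ARCHIVE__ is a tar.gz payload.
      set -e
      PREFIX="${1:-/opt/myapp}"                     # hypothetical install location
      mkdir -p "$PREFIX"
      ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
      tail -n +"$ARCHIVE_LINE" "$0" | tar -xz -C "$PREFIX"
      chown -R root:root "$PREFIX"                  # fix ownership, as in the manual steps
      "$PREFIX/bin/install-init-script.sh"          # hypothetical init-script deployment hook
      exit 0
      __ARCHIVE__

    Tools such as makeself automate exactly this layout.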


  • Drawing a circle in OpenGL ES on Android, squiggly boundaries

    - by ladiesMan217
    I am new to OpenGL ES and having a hard time drawing a circle on my GLSurfaceView. Here's what I have so far, the circle class:

      public class MyGLBall {
          private int points = 40;
          private float vertices[] = {0.0f, 0.0f, 0.0f};
          private FloatBuffer vertBuff;
          // centre of circle
          public MyGLBall() {
              vertices = new float[(points + 1) * 3];
              for (int i = 3; i < (points + 1) * 3; i += 3) {
                  double rad = (i * 360 / points * 3) * (3.14 / 180);
                  vertices[i] = (float) Math.cos(rad);
                  vertices[i + 1] = (float) Math.sin(rad);
                  vertices[i + 2] = 0;
              }
              ByteBuffer bBuff = ByteBuffer.allocateDirect(vertices.length * 4);
              bBuff.order(ByteOrder.nativeOrder());
              vertBuff = bBuff.asFloatBuffer();
              vertBuff.put(vertices);
              vertBuff.position(0);
          }

          public void draw(GL10 gl) {
              gl.glPushMatrix();
              gl.glTranslatef(0, 0, 0);
              // gl.glScalef(size, size, 1.0f);
              gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
              gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
              gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
              gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points / 2);
              gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
              gl.glPopMatrix();
          }
      }

    I couldn't retrieve a screenshot of my image, but as you can see when it runs, the border has crests and troughs, thereby rendering it squiggly, which I do not want. All I want is a simple curve.
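    For comparison, a conventional triangle-fan vertex loop (a sketch, not the poster's code) derives the angle directly from the vertex number and draws all of the generated vertices rather than points / 2:

      // Sketch: vertex 0 is the centre, then `points` vertices around the rim.
      for (int v = 1; v <= points; v++) {
          double rad = (v * 360.0 / points) * (Math.PI / 180.0);
          vertices[v * 3] = (float) Math.cos(rad);
          vertices[v * 3 + 1] = (float) Math.sin(rad);
          vertices[v * 3 + 2] = 0;
      }
      // ...and in draw(), render all points + 1 vertices:
      // gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points + 1);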


  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text of all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when a query is run. If you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in your SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files, there isn't a special search editor built in. It is turned ON by default but, despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are:
    - SSMS 2008 R2 slowness, reported by a few people.
    - An Object Explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node.
    - A datagrid bug in SQL snippets.
    - Ability to read illegal XML characters from log files.
    - Fixed the upper limit bug of a saved history text to 5 MB.
    - A bug where searching through result sets prevented the search.
    - A bug with text formatting erroring out for certain scripts.
    - A bug with finding servers where it would return null even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also, |NodeName| and |ObjectName| values now show the same thing.
    You can download the new version 1.9.3 here. Enjoy it!


  • Lots of great stuff going on with Oracle Secure Global Desktop!

    - by Chris Kawalek
    You're probably familiar with Oracle Secure Global Desktop, our solution for providing secure, browser-based access to Oracle Applications and other enterprise software. It's a fantastic product and one I've been personally involved with for nearly a decade! I wanted to give you a quick update on all the fantastic things that are going on with it. First, we have done a few videos with Oracle's Mohan Prabhala at trade shows recently. You can get a quick product refresher and an update on the latest new features by watching these. Next, we talked at length with Brian Madden and Gabe Knuth on Brian and Gabe LIVE about Oracle Secure Global Desktop. Click here or on the screenshot below to go to the brianmadden.com video. Part 1 focuses on Oracle Secure Global Desktop. Listen toward the end for Brian to say, "I kinda want this actually at TechTarget right now." The analysts are talking about us, too. When we released Oracle Secure Global Desktop 4.7, Chris Wolf over at Gartner had this to say on Twitter. Last, just a quick reminder for existing Oracle Applications customers that Oracle Secure Global Desktop is easy for you to leverage for secure application access. Oracle Secure Global Desktop is certified for use with Oracle browser-based applications such as Primavera and E-Business Suite, and with Exalogic. Steven Chan over at the E-Business Suite Technology blog gives a great explanation of how Oracle Secure Global Desktop works with E-Business Suite, as an example. As the title says, lots of great stuff going on! -Chris


  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export. The aim of this article is to address PDF export from client-side grid frameworks. The solution is built using ASP.Net MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.Net MVC and C#.

    Tools used:
    - Visual Studio 2012
    - ASP.Net MVC 4
    - NuGet Package Manager

    JQGrid is one of the client grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. I have given below the command to download JQGrid through the Package Manager Console. From the Tools menu select "Library Package Manager" and then select "Package Manager Console". I have given the screenshot below. This command will pull down the latest JQGrid package and add it to the Scripts folder. Once the script is downloaded and referenced in the project, update the BundleConfig file to add the script reference in the pages. BundleConfig can be found in the App_Start folder in the project structure.

      bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
      bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the configs are added, refer the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

      @Styles.Render("~/Content/jqgrid")

    Add the following lines at the end of the page, before the closing html tags:

      @Scripts.Render("~/bundles/jquery")
      @Scripts.Render("~/bundles/jqueryui")
      @Scripts.Render("~/bundles/jquerygrid")

    That's all to be done from the view perspective. Once these steps are done, the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action. We will add an argument for this Index action; let it be a nullable bool, just to mark the PDF request. In Index.cshtml we will add a table tag with the id "gridTable". We will use this table for making the grid. Since JQGrid is an extension of jQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page, just below the bundle references for jQuery and jQueryUI, to improve performance. This is one of the improvement factors from "why slow" provided by Yahoo.

      <table id="gridTable" class="scroll"></table>
      <input type="button" value="Export PDF" onclick="exportPDF();" />

      @section scripts {
      <script type="text/javascript">
          $(document).ready(function () {
              $("#gridTable").jqGrid({
                  datatype: "json",
                  url: '@Url.Action("GetCustomerDetails")',
                  mtype: 'GET',
                  colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                  colModel: [
                      { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                      { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                      { name: "Location", width: 40, index: "Location", align: "center" },
                      { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                  ],
                  height: 250,
                  autowidth: true,
                  sortorder: "asc",
                  rowNum: 10,
                  rowList: [5, 10, 15, 20],
                  sortname: "CustomerID",
                  viewrecords: true
              });
          });

          function exportPDF() {
              document.location = '@Url.Action("Index")?pdf=true';
          }
      </script>
      }

    The exportPDF method just sets the document location to the Index action method with the pdf Boolean set to true, marking the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that will provide the data as a JSON list. We will see the method explanation below.

      [HttpGet]
      public JsonResult GetCustomerDetails()
      {
          var result = new
          {
              total = 1,
              page = 1,
              records = customerList.Count(),
              rows = (customerList.Select(e => new
              {
                  id = e.CustomerID,
                  cell = new string[]
                  {
                      e.CustomerID.ToString(),
                      e.CustomerName,
                      e.Location,
                      e.PrimaryBusiness
                  }
              })).ToArray()
          };
          return Json(result, JsonRequestBehavior.AllowGet);
      }

    JQGrid expects the response data from the server in a certain format. The server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response data should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and will be sent as an MVC JsonResult. Since we are using HttpGet, it's better to mark the attribute as HttpGet and also set the JSON request behaviour to AllowGet. The in-memory list is initialized in the HomeController constructor for reference.

      public class HomeController : Controller
      {
          private readonly IList<CustomerViewModel> customerList;

          public HomeController()
          {
              customerList = new List<CustomerViewModel>()
              {
                  new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                  new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                  new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" }
              };
          }

          public ActionResult Index(bool? pdf)
          {
              if (!pdf.HasValue)
              {
                  return View(customerList);
              }
              else
              {
                  string filePath = Server.MapPath("Content") + "Sample.pdf";
                  ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
                  return File(filePath, "application/pdf", "list.pdf");
              }
          }
      }

    The Index action method has a Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is first hit for the initial page request. For the PDF operation, a file name is generated and then passed to the ExportPDF method, which will take care of generating the PDF from the data source. The ExportPDF method is listed below.

      private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
      {
          Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
          Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
          Document document = new Document(PageSize.A4);
          PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
          document.Open();
          PdfPTable table = new PdfPTable(columns.Length);
          foreach (var column in columns)
          {
              PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
              cell.BackgroundColor = Color.BLACK;
              table.AddCell(cell);
          }
          foreach (var item in customerList)
          {
              foreach (var column in columns)
              {
                  string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                  PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                  table.AddCell(cell5);
              }
          }
          document.Add(table);
          document.Close();
      }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package. This command will pull down the latest available library. I am using version 4.1.2.0; the latest version may have changed. There are three main things in this library.

    Document: this is the document class, which takes care of creating the document sheet with a particular size. We have used A4. There is also an option to define a rectangle size. This document instance will be further used in the next methods for reference.

    PdfWriter: PdfWriter takes the file name and the document as references. This class enables the document class to generate the PDF content and save it to a file.

    Font: using the Font class the developer can control the font features. Since I need a nice-looking font, I am using Verdana.

    Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We have created two sets of fonts, for the header and for the rows:

      Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
      Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

    We are getting the header columns as a string array. The columns array is looped over and the header is generated, using the headerFont for this purpose:

      PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
      document.Open();
      PdfPTable table = new PdfPTable(columns.Length);
      foreach (var column in columns)
      {
          PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
          cell.BackgroundColor = Color.BLACK;
          table.AddCell(cell);
      }

    Then reflection is used to generate the row-wise details and form the grid:

      foreach (var item in customerList)
      {
          foreach (var column in columns)
          {
              string value = item.GetType().GetProperty(column).GetValue(item).ToString();
              PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
              table.AddCell(cell5);
          }
      }
      document.Add(table);
      document.Close();

    Once the process is done, the PDF table is added to the document and the document is closed to write all the changes to the given file path. Then control moves back to the controller, which takes care of sending the response as a file result with a file name. If the file name is not given, then the PDF will open in the same page; otherwise a popup will open asking whether to save the file or open it.

      return File(filePath, "application/pdf", "list.pdf");

    The final result screen is shown below, with the PDF file opened to show the output.

    Conclusion: this is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export themselves; in that case it's better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us to achieve our goal.


  • Codifying a natural language requirements spec

    - by ProfK
    I have a fairly technical functional requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.: a) Must be alphanumeric, max length 20. b) Must be a dropdown, with values from table x. c) Is mandatory. d) Is mandatory under certain conditions, e.g. when another field is populated or has a specific value. Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc. e) One such complex rule says a status field can only be assigned a certain value if a many-side record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record. I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already-defined system or structure for expressing things like this.
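    One possible starting structure, shown purely as a sketch with hypothetical names (none of this comes from the spec itself), is to express each rule as data rather than prose, with the related-record rule (e) as its own shape:

      // Sketch of a declarative rule model; all names are hypothetical.
      public record FieldRule(
          string Table,                  // e.g. "Applicant"
          string Field,                  // e.g. "Surname"
          string Type,                   // "alphanumeric", "dropdown", ...
          int? MaxLength = null,
          string LookupTable = null,     // for dropdowns: "values from table x"
          bool Mandatory = false,
          string MandatoryWhen = null);  // condition, e.g. "OtherField = 'SomeValue'"

      // Rule (e): a status value is allowed only if at least one
      // related record exists in any of the listed many-side tables.
      public record RelatedRecordRule(
          string Table, string Field, string Value, string[] AnyOfRelatedTables);

      var rules = new object[]
      {
          new FieldRule("Applicant", "Surname", "alphanumeric", MaxLength: 20, Mandatory: true),
          new RelatedRecordRule("Applicant", "Status", "Qualified",
                                new[] { "HighSchool", "Diploma" })
      };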


  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected to encrypt my home folder during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve performance. I could not find much information about the overhead (CPU or drive) associated with home partition encryption. I ran the following, writing to my home partition as well as to the mounted Windows partition:

      dd if=/dev/zero of=~/dummy bs=512 count=10240
      dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240

    The first returned 2.4 MB/s and the second returned 2.5 MB/s. Can I therefore deduce that there is very little overhead to home folder encryption? I'm not sure if the different filesystems will make any difference (/ and /home are ext3).

    Update 1: I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the dd as above are astounding:

      ~:    2.4 MB/s
      /tmp: 42.6 MB/s

    Comments please? The reason I am asking this is that disk access on the netbook is noticeably slow.

    Update 2: I timed each of the dd operations with time:

      ~:
      real 0m2.217s
      user 0m0.028s
      sys  0m2.176s

      /tmp:
      real 0m0.152s
      user 0m0.012s
      sys  0m0.136s

    See also: discussion on UbuntuForums.org and bug report.

    Edit: Output of mount:

      /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600)
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
      gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER)
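    One caveat when reading numbers like these: dd without a sync option can partly measure the page cache rather than the disk. A variant that forces the data out to disk before reporting a speed (a sketch; conv=fdatasync is the GNU dd flag for this) would be:

      dd if=/dev/zero of=~/dummy bs=1M count=64 conv=fdatasync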


  • Notes for a NetBeans IDE 7.4 HTML5 Screencast

    - by Geertjan
    I'm making a screencast that intends to thoroughly introduce NetBeans IDE 7.4 as a tool for HTML, JavaScript, and CSS developers. Here's the current outline; additions and other suggestions are welcome.

    Getting Started
    - Downloading NetBeans IDE for HTML5 and PHP
    - Examining the NetBeans installation directory, especially netbeans.conf
    - Examining the NetBeans user directory
    - Command line options for starting NetBeans IDE

    Exploring NetBeans IDE
    - Menus and toolbars
    - Versioning tools
    - Options window: go through the whole Options window; change look and feels; adding themes; syntax coloring; code templates
    - Plugin Manager and Plugin Portal: Dark Look and Feel Themes; Toggle line wrap; Emmet; HTML Tidy
    - NetBeans Cheat Sheets

    Creating HTML5 projects
    - From scratch
    - From online template, e.g., Twitter Bootstrap
    - From ZIP file
    - From folder on disk
    - From sample

    Editing
    - Useful shortcuts: Alt-Enter (see the current hints); Alt-Shift-DOT/COMMA (expand selection; Ctrl instead of Alt on Mac); Ctrl-Shift-Up/Down (copy up/down); Alt-Shift-Up/Down (move up/down); Alt-Insert (generate code, Lorem Ipsum)
    - View menu | Show Non-printable Characters
    - Source menu; show keyboard shortcut card
    - Useful hints: Surround with Tag; Remove Surrounding Tag
    - Useful code completion: link tag for CSS (show completion); script tag for JavaScript (show completion)
    - Create code templates in the Options window
    - Useful HTML Palette items: Unordered List; Link
    - Useful code navigation: Navigator; Navigate menu
    - Useful project settings: project-level deployment settings; CSS preprocessors (SASS/LESS); Cordova support
    - Useful window management: dragging, minimizing, undocking; Ctrl-Shift-Enter (distraction-free mode); Alt-Shift-Enter (maximization)

    Debugging
    - JavaScript debugger

    Deploying
    - Embedded browser
    - Responsive design
    - Inspect in NetBeans mode
    - Chrome browser with NetBeans plugin
    - Android and iOS browsers
    - Cordova makes native packages
    - On-device debugging
    - On-device styling

    Documentation
    - PHP and HTML5 Learning Trail: https://netbeans.org/kb/trails/php.html

    Contributing
    - Social media: Twitter, Facebook, blogs
    - Plugin Portal

    I am planning to complete the above screencast this week and will continue editing this page as more useful features arise in my mind, or hopefully in the comments on this blog entry!


  • JavaScript autotab function not working on iPad or iPhone [migrated]

    - by freddy6
    I have this piece of HTML code:

      <form name="postcode" method="post" onsubmit="return OnSubmitForm();">
        <input class="postcode" maxlength="1" size="1" name="c" onKeyup="autotab(this, document.postcode.o)" />
        <input class="postcode" maxlength="1" size="1" name="o" onKeyup="autotab(this, document.postcode.d)" />
        <input class="postcode" maxlength="1" size="1" name="d" onKeyup="autotab(this, document.postcode.e)" />
        <input class="postcode" maxlength="1" size="1" name="e" />
        <br />
      </form>

    which uses this JavaScript:

      <script>
      /* Auto tabbing script - By JavaScriptKit.com
         http://www.javascriptkit.com
         This credit MUST stay intact for use */
      function autotab(original, destination) {
        if (original.getAttribute && original.value.length == original.getAttribute("maxlength"))
          destination.focus();
      }
      </script>
      <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
      <script src="js/scripts.js" type="text/javascript"></script>
      <script type="text/javascript">
      function OnSubmitForm() {
        if (document.postcode.operation[0].checked == true) {
          document.postcode.action = "plans.php";
        }
        else if (document.postcode.operation[1].checked == true) {
          document.postcode.action = "plans_gas.php";
        }
        else if (document.postcode.operation[2].checked == true) {
          document.postcode.action = "plans_duel.php";
        }
        return true;
      }
      </script>

    As soon as you enter one character into one of the text boxes, it automatically tabs across to the next text box. This works fine on a PC or Mac, in Safari and also in all other browsers. But when viewing the web page on an iPad or iPhone (using Safari), the auto-tabbing function does not work. Any ideas on how to make the auto-tab work on these mobile devices?
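    One variation sometimes suggested for mobile Safari (a sketch, untested here) is to drive the same autotab function from the input event, which fires for soft-keyboard edits in cases where keyup may not:

      <input class="postcode" maxlength="1" size="1" name="c"
             oninput="autotab(this, document.postcode.o)" />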


  • Error when upgrading initscripts using dist-upgrade on a CD image using UCK

    - by InkBlend
    I was using UCK to customize an Ubuntu 11.10 image, and ran a dist-upgrade on it (from the console) to try to update all of the packages on it. The upgrade worked successfully for all but two packages, so I tried again and got the same error message. This is what happened:

      # sudo apt-get dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up initscripts (2.88dsf-13.10ubuntu4.1) ...
      guest environment detected: Linking /run/shm to /dev/shm
      rmdir: failed to remove `/run/shm': Device or resource busy
      Can't symlink /run/shm to /dev/shm; please fix manually.
      dpkg: error processing initscripts (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of ifupdown:
       ifupdown depends on initscripts (>= 2.88dsf-13.3); however:
        Package initscripts is not configured yet.
      dpkg: error processing ifupdown (--configure):
       dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       initscripts
       ifupdown
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      #

    How can I fix this? Is the problem with the ISO, with dpkg, or can I just not upgrade some packages with UCK? I am using Ubuntu 11.10 and UCK 2.4.5 to customize an Ubuntu 11.10 image.

    Read the article

  • What are Information Centers?

    - by user12244613
    Information Centers are similar to product pages in the Oracle Sun System Handbook. Many customers like the Oracle Sun System Handbook concept of a home page with all the product attributes, troubleshooting, etc. accessible from a single place. This concept is now available for a range of Oracle Solaris, Systems, and Storage products. The Information Center for each product covers areas such as: Overview, Hot Topics, Patching and Maintenance. The Information Center pages are dynamically generated each night to ensure the latest content is available to you. Here are the top Solaris, Systems, and Storage Information Centers:

    Oracle Explorer Data Collector
    Oracle Solaris 10 Live Upgrade
    Oracle Solaris 11 Booting Information Center
    Oracle Solaris 11 Desktop and Graphics Information Center
    Oracle Solaris 11 Image Packaging System (IPS) Information Center
    Oracle Solaris 11 Installation Information Center
    Oracle Solaris 11 Product Information Center
    Oracle Solaris 11 Security Information Center
    Oracle Solaris 11 System Administration Information Center
    Oracle Solaris 11 Zones Information Center
    Oracle Solaris Crash Analysis Tool (SCAT) - Information Center
    Oracle Solaris Cluster Information Center
    Oracle Solaris Internet Protocol Multipathing (IPMP) Information Center
    Oracle Solaris Live Upgrade Information Center
    Oracle Solaris ZFS Information Center
    Oracle Solaris Zones Information Center
    CMT T1000/T2000 and Netra T2000
    CMT T5120/T5120/T5140/T5220/T5240/T5440 Systems
    M3000/M4000/M5000/M8000/M9000-32/M9000-64
    Management and Diagnostic Tools for Oracle Sun Systems
    Netra CT410/810 and Netra CT900
    Network-Attached Storage (NAS)
    Oracle Explorer Data Collector
    Oracle VM Server for SPARC (LDoms)
    Pillar Axiom 600
    SL3000 Tape Library
    Sun Disk Storage Patching and Updates
    Sun Fire 3800/4800/4810/6800/E2900/E4900/E6900/V1280 - Netra 1280/1290
    Sun Fire 12K/15K/E20K/E25K
    Sun Fire X4270 M2 Server
    Sun x86 Servers
    T3 and T4 Systems
    Tape Domain Firmware
    V210/V240/V440/V215/V245/V445 Servers
    VSM (VTSS/VLE/VTCS)

    Read the article

  • Clusterware 11gR2 &ndash; Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides a large range of interesting solutions to ensure high availability of the database. Data Guard, RAC, or both together (as recommended by Oracle for a Maximum Availability Architecture - MAA) are the most frequently found and used solutions. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, people often think of expensive third-party cluster systems. Oracle Clusterware technology, which comes at no extra cost with Oracle Database or Oracle Unbreakable Linux, is - in the minds of most people - linked to Oracle RAC and therefore seldom used to implement failover solutions. Oracle Clusterware 11gR2 (a part of Oracle 11gR2 Grid Infrastructure) provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and then to protect, almost any kind of application (from the simple xclock to the most complex application server). Quoting Oracle: "Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster."

    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system; unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course on the Clusterware framework. I just hope it will let you see how powerful it is and give you the will to go further with it... Note: this entry has been revised (rev. 2) following comments from Philip Newlan.

    Prerequisites

    2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel.
    Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    Oracle 11gR2 Database (11.2.0.1)
    Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 - Install the software

    Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    Install the Oracle Database standalone binaries on both nodes.
    Use asmca to check/mount the disk groups on the 2 nodes.
    Use dbca to create and configure a database on the primary node. Let's name it DB11G.
    Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 - Set up the resource to be protected

    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (ex: kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command:

        crsctl status resource

    which shows an ora.db11g.db entry.
    Let's save the definition of this resource for future use:

        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it!

        srvctl stop database -d DB11G
        srvctl remove database -d DB11G

    Instead, we need to create a new resource of a more general type: cluster_resource. Here are the steps to achieve this. Create an action script, /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh:

        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'clean')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown abort
          ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}' ` ;do kill -9 $i; done
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        *)  # default: nothing to do
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi

    This script must provide, at least, methods to start, stop, clean and check the database. It is self-explanatory and contains nothing special. Just be aware that it must be executable (+x), it runs as the oracle user (because of the ACL property - see later) and needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the "last gasp clean up", for example a shutdown abort or a kill -9 of all the remaining processes.

        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file based on the information we got from the previous myResource.txt; name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.
        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt. Notice the NAME property, which should not have the ora. prefix; the TYPE property, which is not ora.database.type but cluster_resource; the definition of ACTION_SCRIPT; and HOSTING_MEMBERS, which enumerates the members of the cluster (as returned by the olsnodes command).

        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource

        crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.

    Step 4 - Test it

    We will test the setup using 2 methods.

        crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but this time using a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.
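    Between the relocate tests, it can help to confirm which node currently hosts the database. A quick sanity check, using the same resource name registered above (exact output formatting may vary by patch level):

        crsctl status resource DB11G.db      # shows STATE, TARGET and the hosting node
        crsctl status resource DB11G.db -p   # dumps the full registered profile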
    Conclusion

    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:

    Create an executable action script on every node of the cluster.
    Create a resource file.
    Create/register the resource with the OCR using the resource file.
    Start the resource.

    This solution is a very interesting alternative to licensable third-party solutions.

    References

    Clusterware 11gR2 documentation
    Oracle Clusterware Resource Reference
    Clusterware for Unbreakable Linux
    Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to have an idea of complexity)
    Oracle Clusterware on OTN

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • Cannot install extensions required for GNOME Shell themes

    - by Soham Chowdhury
    I keep getting this output:

        soham@fortress:~$ sudo apt-get install gnome-shell-extensions gnome-tweak-tool
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-tweak-tool is already the newest version.
        The following NEW packages will be installed:
          gnome-shell-extensions
        0 upgraded, 1 newly installed, 0 to remove and 43 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/121 kB of archives.
        After this operation, 849 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 179291 files and directories currently installed.)
        Unpacking gnome-shell-extensions (from .../gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb (--unpack):
         trying to overwrite '/usr/share/locale/lv/LC_MESSAGES/gnome-shell-extensions.mo', which is also in package gnome-shell-extensions-common 3.2.0-0ubuntu1~oneiric1
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Update: Fixed that. Now GNOME Tweak Tool shows me an exclamation mark beside the extension enable button, saying "Extension doesn't support shell version". My GNOME Shell is already the latest version. Help!
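    For readers hitting the same trying-to-overwrite error before the update above, a hedged sketch of the usual escape routes. The package filename is copied from the error output; removing the Oneiric-era gnome-shell-extensions-common package assumes it is the stale owner of the conflicting file:

        # Option 1: remove the old package that owns the conflicting file, then retry
        sudo apt-get remove gnome-shell-extensions-common
        sudo apt-get -f install

        # Option 2: let dpkg overwrite the single conflicting file
        sudo dpkg -i --force-overwrite \
          /var/cache/apt/archives/gnome-shell-extensions_3.4.1~git20120508.dfd7191a-0ubuntu1~12.04~ricotz0_all.deb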

    Read the article

  • Problem installing Ubuntu One in Windows XP

    - by Garry
    I've just had to reinstall my Ubuntu One client for Windows XP Service Pack 3, and whenever I do, it uninstalls my previous attempt and then comes up with the following error message:

        c:\Program Files\ubuntuone\dist\ubuntuone-control-panel-qt.exe
        This application has failed to start because the application configuration is incorrect. Reinstalling this application may fix this problem.

    The only option I have is "OK", and then nothing happens. I've downloaded Ubuntu One several times and tried Windows 2000 compatibility mode and Administrator mode - nothing. The only thing I can make happen is in Windows NT 5 mode, where a small box appears with nothing showing, and nothing else. I really need Ubuntu One on my computer as it has everything on it, so I'm wondering if I can do anything to make it install or if it's a lost cause. The last time I tried to install Ubuntu One on Windows XP I had the same problem, but after about 10 tries it installed. I've tried about 30 times now and nothing. My computer is a Dell OptiPlex SX280 with 2 GB RAM, a 340 GB hard disk and an Intel Pentium 4 HT processor.

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario: I'm creating a 2-player fighting game. An average battle will include a map (moving or still) and 2 characters (which are rendered by redrawing a varying number of sprites one after the other). At the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime; // the time when the cycle began
                long timeDiff;  // the time it took for the cycle to execute
                int sleepTime;  // ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40; // target frame period in ms (25 ms for 40 fps)
                beginTime = System.nanoTime() / 1000000;
                // execute loop to update, check collisions and draw
                gameLoop();
                // calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;
                // calculate sleep time
                sleepTime = fps - (int) (timeDiff);
                if (sleepTime > 0) { // if sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() the characters are drawn to the screen (a character holds an array of images which make up its current sprites); every gameLoop() call advances the character's current sprite to the next one, looping back when the end is reached. But as you can imagine, if a sprite cycle is only 3 images long, then calling gameLoop() 40 times a second makes the character's movement cycle repeat 40/3 ≈ 13 times a second. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set number of frames per second when I have 2 characters on screen with varying numbers of sprites?
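    A common answer, sketched below under the assumption that the Swing Timer loop above stays in place: advance each character's sprite index from accumulated elapsed time rather than once per gameLoop() call, so a 3-image cycle plays at its own rate (say 10 animation frames per second) no matter how often the loop runs. Class and field names here are illustrative, not from the poster's code:

        // Sketch: time-based sprite animation, decoupled from the render rate.
        class SpriteAnimation {
            private final int frameCount;       // e.g. 3 images in a walk cycle
            private final long frameDurationMs; // how long each image stays on screen
            private long elapsedMs;             // accumulated time

            SpriteAnimation(int frameCount, long frameDurationMs) {
                this.frameCount = frameCount;
                this.frameDurationMs = frameDurationMs;
            }

            // Call once per game-loop tick with the measured delta time.
            void update(long deltaMs) {
                elapsedMs += deltaMs;
            }

            // Index of the image to draw this tick; wraps around automatically.
            int currentFrame() {
                return (int) ((elapsedMs / frameDurationMs) % frameCount);
            }
        }

    In gameLoop(), each character would call update(timeDiff) and then draw images[currentFrame()]; a 40 fps loop and a 3-frame, 100 ms-per-frame animation then coexist without skipped or over-drawn sprite frames.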

    Read the article

  • WCF Versioning, Naming and Endpoint URL

    - by Vinothkumar VJ
    I have a WCF service and a main class library (Lib1). Say I have a Save Profile service: WCF gets data (with a predefined data contract) from the client, passes it to Lib1, generates a response and sends it back to the client.

    WCF method: SaveProfile(ProfileDTO profile)

    Current version (1.0) - ProfileDTO has: UserName, Password, FirstName, DOB (string, yyyy-mm-dd), CreatedDate (string, yyyy-mm-dd)
    Next version (2.0) - ProfileDTO has: UserName, Password, FirstName, DOB (Unix timestamp), CreatedDate (Unix timestamp)
    Version 3.0 - ProfileDTO has the same fields, with a change in the UserName and Password length validation.

    In short, we have a data contract change and a workflow change between each version.

    1. How do I name the methods in the WCF service and in Lib1?
    2. Is there a specific pattern I should follow for ease of development and maintenance?
    3. Do I have to have different endpoints for different versions?

    In the above example I have a method named "SaveProfile". Do I have to name the methods "SaveProfile1.0", "SaveProfile2.0", etc.? If that is the case, then when there is no change between version 3.0 and 4.0 there will be maintenance difficulties. I'm looking for an approach that will ease maintenance.
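    One widely used convention, sketched as a starting point rather than the definitive answer: keep the operation name SaveProfile stable, put the version in the data/service contract namespace, and expose one endpoint per breaking contract version, so versions with identical wire contracts can share an endpoint. All service, contract and namespace names below are hypothetical:

        <service name="MyCompany.ProfileService">
          <!-- v1 clients keep their original address and string-date contract -->
          <endpoint address="v1" binding="basicHttpBinding"
                    contract="MyCompany.Contracts.V1.IProfileService" />
          <!-- v2 introduces the Unix-timestamp contract; v3 (validation-only
               change, same wire contract) can share this endpoint -->
          <endpoint address="v2" binding="basicHttpBinding"
                    contract="MyCompany.Contracts.V2.IProfileService" />
        </service>

    Each versioned IProfileService would carry a versioned contract namespace (e.g. http://schemas.mycompany.com/profile/v2) instead of version-suffixed method names, which avoids the "SaveProfile2.0" naming problem entirely.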

    Read the article

< Previous Page | 512 513 514 515 516 517 518 519 520 521 522 523  | Next Page >