Search Results

Search found 22986 results on 920 pages for 'allocation unit size'.


  • Sound card not detected in 13.04

    - by Ganessh Kumar R P
    I have a problem with my sound card. I don't have a volume up or down option anywhere, and under Settings -> Sound no card is detected. But when I run the command sudo aplay -l, I get the following output:

        **** List of PLAYBACK Hardware Devices ****
        Failed to create secure directory (/home/ganessh/.config/pulse): Permission denied
        card 0: MID [HDA Intel MID], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    And the command lspci -v | grep -A7 -i "audio" outputs:

        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
                Subsystem: Dell Device 02a2
                Flags: bus master, fast devsel, latency 0, IRQ 48
                Memory at f0f20000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: snd_hda_intel
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) (prog-if 00 [Normal decode])
        --
        02:00.1 Audio device: NVIDIA Corporation GF106 High Definition Audio Controller (rev a1)
                Subsystem: Dell Device 02a2
                Flags: bus master, fast devsel, latency 0, IRQ 17
                Memory at d3efc000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: snd_hda_intel
        07:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300

    So I assume the drivers are properly installed, but I still don't get any option in the settings or volume control. The same card used to work well back in the 2010 releases (10.04 and 10.10). Any help is appreciated. Thanks.
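
    The "Failed to create secure directory (/home/ganessh/.config/pulse): Permission denied" line is a strong hint: if ~/.config is owned by root (often the result of running a GUI app with sudo), PulseAudio cannot start for the user, and the Sound settings panel then shows no card even though ALSA sees the hardware. A minimal sketch of the usual check and fix, under that assumption:

        # Check ownership of the config directory (root ownership is the common culprit)
        ls -ld ~/.config ~/.config/pulse

        # Reclaim ownership for the logged-in user
        sudo chown -R "$USER":"$USER" ~/.config

        # Kill the failed PulseAudio instance; it respawns on demand
        pulseaudio -k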


  • HELP! Free space not reclaimed after online resizing ext4 in Ubuntu 9.10

    - by TiansHUo
    My root partition was filling up, with only 500 MB left, so I wanted to resize it from 20 GB to 40 GB. I resized the partition using these steps:

        1. Used GParted to shrink another partition, freeing space for the ext4 partition
        2. Used fdisk to delete the root partition (on /dev/sda2) and recreate it with the new size
        3. Ran resize2fs /dev/sda2
        4. Updated GRUB 2

    But now the problem is that although I can boot into the new partition, and the partition shows as 40 GB, the free space is still 500 MB. So I booted from a live CD and checked with e2fsck -p /dev/sda2; it reported clean. I then added the -f flag (force check); still, the drive is full.
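
    Symptoms like these usually mean the filesystem itself was never actually grown to fill the enlarged partition, for example if resize2fs was run against the old in-memory partition table before a reboot. A sketch of the offline grow sequence from the live CD, assuming the root filesystem is on /dev/sda2 as above:

        # The filesystem must be unmounted for an offline resize
        sudo umount /dev/sda2

        # resize2fs requires a full check first
        sudo e2fsck -f /dev/sda2

        # With no size argument, resize2fs grows the filesystem to fill the whole partition
        sudo resize2fs /dev/sda2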


  • VirtualBox: VBoxManage modifyhd hosting on mac os x resize not supported

    - by dwstein
    I am a complete newbie. I'm hosting a VM on OS X using VirtualBox, and I'm trying to resize the virtual hard drive using the following command in the terminal:

        VBoxManage modifyhd "<absolute path including name and extension>" --resize 20480

    I used a disk size of 25480 (I'm not really sure how to pick the correct size), and I got the following error:

        Progress state: VBOX_E_NOT_SUPPORTED
        VBoxManage: error: Resize hard disk operation for this format is not implemented yet!

    VirtualBox version: 4.1.18. I don't really even know what to ask. What am I doing wrong?
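
    VBoxManage --resize only works on VDI and VHD images, so VBOX_E_NOT_SUPPORTED typically means the disk is in another format, such as VMDK. A common workaround is to clone the disk to VDI, resize the clone, and attach it in place of the original. A sketch, with placeholder file names:

        # Clone the unsupported-format disk to VDI
        VBoxManage clonehd "mydisk.vmdk" "mydisk.vdi" --format VDI

        # Resize the clone; --resize takes the new size in megabytes (20480 MB = 20 GB)
        VBoxManage modifyhd "mydisk.vdi" --resize 20480

    Afterwards the VDI is attached to the VM's storage controller in place of the old disk, and the partition inside the guest still has to be grown to use the new space.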


  • How to debug lack of sound in Asus EEE PC

    - by Kalmar
    I have an Asus EEE PC 1225B with a fresh Lubuntu 12.04, and no sound. It doesn't seem to be a common problem, so I have to do some research into what's wrong. I tried running alsamixer, so I know I have a Realtek ALC269VB with nothing muted unexpectedly. What can I do next to identify and solve the problem?

    Additional info: alsamixer shows two cards, HD-Audio Generic and HDA ATI SB (Realtek ALC269VB); the first one is muted.

        ~$ aplay
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        aplay: main:682: blad otwierania audio: Nie ma takiego pliku ani katalogu

    The Polish part can be translated as "error opening audio: There is no such file or directory".

        ~$ sudo lspci -v | grep -A7 -i "audio"
        00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310]
                Subsystem: ASUSTeK Computer Inc. Device 103b
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at feb44000 (32-bit, non-prefetchable) [size=16K]
                Capabilities: [50] Power Management version 3
                Capabilities: [58] Express Root Complex Integrated Endpoint, MSI 00
                Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
                Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
        --
        00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40)
                Subsystem: ASUSTeK Computer Inc. Device 103b
                Flags: bus master, slow devsel, latency 32, IRQ 16
                Memory at feb40000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: [50] Power Management version 2
                Kernel driver in use: snd_hda_intel
                Kernel modules: snd-hda-intel
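
    The muted first card is the HDMI output on the Radeon, so analog sound should be coming from the second device (the SBx00 Azalia with the Realtek codec). A reasonable next step is to confirm which card index ALSA gave it and test that device directly; if the test tone works, making it the default card usually fixes desktop sound. A sketch, assuming the Realtek card came up as card 1:

        # Confirm the card numbering
        aplay -l

        # Send a two-channel test tone straight to card 1's analog output
        speaker-test -D plughw:1,0 -c 2

        # If the test is audible, make card 1 the system-wide ALSA default
        printf 'defaults.pcm.card 1\ndefaults.ctl.card 1\n' | sudo tee /etc/asound.conf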


  • What can I do to utilize all my hard disk space?

    - by Twatcher
    I had Windows XP running on my computer. Then I installed Ubuntu from within Windows. Then I decided I wanted to have only Ubuntu, partly because I got a system message that I was out of disk space. I booted my system from a live Ubuntu DVD and deleted the partition with Windows on it, and also the other partition that had my data on it. I expanded the partition which I thought was the system partition, since there was no other partition left (it had ext format). After that, Ubuntu was working fine and I thought I had enough disk space, since my hard drive is an 80 GB ATA Maxtor. I left a small partition as backup. But after downloading a small number of files I got the message again that I am running out of disk space. I don't know what to do. How can I make my disk space bigger? I am not used to Ubuntu's file system, and I don't have an overview of how to actually see how much space is left for me to use. As far as I understand, I basically now have one partition with the system on it and one small backup partition. My system is (from the system utility):

        Ubuntu 12.04 LTS
        3.9 GB memory
        Intel Core 2 2.4 GHz
        80 GB ATA Maxtor

    Here are the results of sudo fdisk -l:

        Disk /dev/sda: 80.0 GB, 79998918144 bytes
        255 heads, 63 sectors/track, 9725 cylinders, total 156247887 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x41ab2316

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *       63  123750399  61875168+   7  HPFS/NTFS/exFAT
        /dev/sda2    123750400  156246015   16247808   7  HPFS/NTFS/exFAT
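
    Notably, the fdisk output shows both partitions still typed as Id 7 (HPFS/NTFS/exFAT) and none as Linux, which suggests the "installed from within Windows" setup was a Wubi-style install whose root filesystem lives in a file inside an NTFS partition, so the repartitioning may not have done what was intended. Before changing any partitions, it helps to see what is mounted where and what is using the space. A minimal inspection sketch:

        # Each mounted filesystem, its size, and how full it is
        df -h

        # The block-device layout as the kernel sees it
        lsblk

        # The biggest top-level directories on the root filesystem, largest last
        sudo du -xh --max-depth=1 / | sort -h | tail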


  • wireless blocked after installing ubuntu 12.04

    - by Cornelia Frank
    I am using a Lenovo S10-3 IdeaPad; I had no problems with earlier versions of Ubuntu, only since installing 12.04. I have looked through many of the questions on the same issue and tried the potential solutions, but cannot seem to solve my problem. The hardware switch is in the 'on' position and the wireless light comes on very briefly (2-3 sec) when the laptop starts up, but then goes off and stays off. Pressing Fn+F5 does nothing at all. I'd be grateful for any assistance. Cornelia

    I have received the following responses in the terminal:

        cf@cf-Lenovo:~$ rfkill list all
        0: ideapad_wlan: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: ideapad_bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

        cf@cf-Lenovo:~$ iwconfig
        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
        eth0      no wireless extensions.

        cf@cf-Lenovo:~$ lshw -C network
        WARNING: you should run this program as super-user.
          *-network
               description: Ethernet interface
               product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:05:00.0
               logical name: eth0
               version: 02
               serial: 00:26:9e:ee:7f:4c
               size: 100Mbit/s
               capacity: 100Mbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=10.0.1.8 latency=0 multicast=yes port=MII speed=100Mbit/s
               resources: irq:43 ioport:2000(size=256) memory:f0520000-f0520fff memory:f0510000-f051ffff memory:f0540000-f055ffff
          *-network DISABLED
               description: Wireless interface
               product: AR9285 Wireless Network Adapter (PCI-Express)
               vendor: Atheros Communications Inc.
               physical id: 0
               bus info: pci@0000:09:00.0
               logical name: wlan0
               version: 01
               serial: c4:17:fe:f8:bc:d7
               width: 64 bits
               clock: 33MHz
               capabilities: bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=ath9k driverversion=3.2.0-31-generic-pae firmware=N/A latency=0 multicast=yes wireless=IEEE 802.11bgn
               resources: irq:18 memory:f0100000-f010ffff
        WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
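
    The rfkill output holds the key clue: the ideapad switches report unblocked, but phy0 (the Atheros radio itself) reports "Hard blocked: yes". On the S10-3 this is commonly caused by the ideapad_laptop kernel module misreading the hardware switch under 12.04. A sketch of the usual test and the persistent workaround:

        # Temporarily remove the module that is asserting the hard block
        sudo modprobe -r ideapad_laptop

        # Check whether the hard block is gone and the network comes back
        rfkill list all

        # If that fixes it, keep the module from loading at boot
        echo "blacklist ideapad_laptop" | sudo tee -a /etc/modprobe.d/blacklist.conf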


  • unable to send mail from postfix on Ubuntu 12.04

    - by gilmad
    I'm trying to send an email through Google from my localhost (via PHP 5.3), but Google keeps blocking my requests. I tried to follow the solutions given in a few similar questions, but for some reason they do not work. I followed these instructions to configure it: http://www.dnsexit.com/support/mailrelay/postfix.html

    Now for the config data. My main.cf file looks like this:

        relayhost = [smtp.gmail.com]:587
        smtp_fallback_relay = [relay.google.com]
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =

    My sasl_passwd looks like this:

        [smtp.gmail.com]:587 [email protected]:password

    And this is how the mail.log rows look:

        Dec 14 10:24:50 COMP-NAME postfix/pickup[5185]: 1C3987E0EDD: uid=33 from=
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: 1C3987E0EDD: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: from=, size=483, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/smtp[5501]: 1C3987E0EDD: to=, relay=smtp.gmail.com[173.194.70.109]:587, delay=0.61, delays=0.19/0/0.32/0.1, dsn=5.7.0, status=bounced (host smtp.gmail.com[173.194.70.109] said: 530 5.7.0 Must issue a STARTTLS command first. w3sm8024250eel.17 (in reply to MAIL FROM command))
        Dec 14 10:24:50 COMP-NAME postfix/cleanup[5499]: C20677E0EDE: message-id=<[email protected]
        Dec 14 10:24:50 COMP-NAME postfix/bounce[5502]: 1C3987E0EDD: sender non-delivery notification: C20677E0EDE
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: C20677E0EDE: from=<, size=2532, nrcpt=1 (queue active)
        Dec 14 10:24:50 COMP-NAME postfix/qmgr[5186]: 1C3987E0EDD: removed
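
    The bounce itself names the problem: Gmail's port 587 requires STARTTLS before it accepts MAIL FROM, and the main.cf above never enables TLS for outbound SMTP. A sketch of the additions that typically fix this, assuming a stock Ubuntu postfix with the ca-certificates package installed:

        # Additions to /etc/postfix/main.cf: require TLS on the outbound (client) side
        #   smtp_tls_security_level = encrypt
        #   smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        #   smtp_sasl_security_options = noanonymous

        # Then rebuild the password map and reload postfix
        sudo postmap /etc/postfix/sasl_passwd
        sudo service postfix reload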


  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases have grown, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so to reduce the size of our messages we'd have to do something about the serialisation manually. There appear to be a variety of LZW implementations in JavaScript, some of which confuse me as to their compatibility for in-browser use only versus transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to incur a noticeable performance drag when JavaScript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in JavaScript itself; BSON does not have a JavaScript implementation; and an alternative BISON project that I tested does not deserialise everything back to the original values (large numbers), and it does not look like any further development will happen there either. What are some other options others have explored for Node.js?


  • XNA 2D Spritesheet drawing rendering problem

    - by user24092
    I'm making a tile-based game, using one spritesheet containing all tile graphics. Each tile has a size of 32x32 pixels. The main problem: when I draw a tile to the screen, if the tile position x and y are not rounded, or if scaling is active in the spriteBatch.Draw() method (scale != 1.0f), I get lines from adjacent tiles on the spritesheet bleeding into the currently drawn tile. I already tried setting SamplerState to PointClamp and removing anti-aliasing, but it still doesn't work. Here I'll show images of some tests that I made with a test spritesheet I created (a 9x9 spritesheet, with each 32x32 sprite a unique solid color).

    Tests: http://img6.imageshack.us/img6/5946/testsqj.png
    Spritesheet used: http://imageshack.us/a/img821/1341/tilesm.png

    Even with anti-aliasing removed and PointClamp set as the sampler state, XNA keeps drawing part of the adjacent pixels of the texture on the screen. What I want is to get the correct area of the tilesheet texture (as seen in the first test, which gets just the yellow pixels). My question is: is there any way I can fix this WITHOUT adding tile spacing or any other modification involving the tilesheet? Maybe by disabling some texture filtering done by XNA, or something like that.


  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I got an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the size of the root partition (where the DB is stored) is itself 300 GB; essentially this means around 96% of / is full and there's no space left for storing a dump or backup. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines:

        1. Request an extra drive to be attached to the server and take a dump onto that drive.
        2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
        3. When migration is needed, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or any better alternate approach for this task?
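
    Since there is no local space for a dump file, one alternative to attaching an extra drive is to stream the dump straight to the new server over SSH, compressing in flight so nothing is written locally. A sketch, with host and paths as placeholders; --single-transaction takes a consistent InnoDB snapshot without locking writes, and --master-data=2 records the binlog coordinates needed to start replication afterwards:

        # Stream a compressed dump of all databases to the new server
        mysqldump --all-databases --single-transaction --master-data=2 -u root -p \
          | gzip \
          | ssh user@new-server 'cat > /data/dump.sql.gz'

        # On the new server: restore, then configure the slave from the recorded coordinates
        zcat /data/dump.sql.gz | mysql -u root -p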


  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old, memory-constrained machine. I allocate it 378 MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up a very large database, compress it using 7-Zip, and FTP it to the backup site (all in a script). Unfortunately the compression part takes a massive amount of RAM (far exceeding the 378 MB); it goes for the paging file and brings absolutely everything to a crawl, literally taking 2-3 days if left unattended. To fix this, I have to shut down the VM, temporarily give it 768 MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script?

        1. Shut down the guest OS (I think I've got this part)
        2. Change the RAM allocation from 378 to 768
        3. Start the guest OS again
        4. Then, 1 hour later, do everything in reverse.


  • Possible to draw a select portion of a render target? (in XNA)

    - by TheBroodian
    I'm going to try to do this in reverse fashion and skip straight to the punch line, then give the back story afterward: is it possible, after drawing a scene to a RenderTarget2D, to draw only a select portion of that RenderTarget2D rather than the entire thing?

    I'm using xTile to manage world data in my game (it's a great piece of work; colinvella [xTile's author] has made an amazing product), and for the most part it works great. xTile supports parallax effects in its layers to add wonderful depth to 2D scenes, which was great until I implemented a dynamic split-screen system into my game. I wanted to make a co-op game that wouldn't require players to be in close proximity to each other, so I made it so that if the players separate too far apart, the singular full-screen viewport 'snaps apart' and is replaced by two split-screen viewports, which then smoothly transition to their respective player targets. The effect is pretty smooth, apart from the parallax backgrounds becoming skewed once the viewports split, because xTile's ratio for handling parallax effects is dependent upon viewport size. This is unfortunate, because the effect would otherwise be really snazzy, but the backgrounds become pretty heavily affected when the game goes from single-viewport to multi-viewport. So colinvella suggests using render targets to record the scene at full viewport size and then drawing only a portion of it. But as far as I can tell, that isn't even possible? That being said, I've never used render targets before, so I'm still learning, hence the question here.


  • Trendnet tew-424ub wireless not working after update 12.10

    - by dwa
    I updated packages from the software manager and now my wireless won't work. It's a Trendnet TEW-424UB.

    iwconfig says:

        lo        no wireless extensions.
        eth0      no wireless extensions.

    sudo lshw -C network:

        description: Ethernet interface
        product: RTL8111/8168B PCI Express Gigabit Ethernet controller
        vendor: Realtek Semiconductor Co., Ltd.
        physical id: 0
        bus info: pci@0000:02:00.0
        logical name: eth0
        version: 02
        serial: 1c:6f:65:46:e9:d4
        size: 100Mbit/s
        capacity: 1Gbit/s
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
        configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.1.137 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
        resources: irq:41 ioport:de00(size=256) memory:fdaff000-fdafffff memory:fdae0000-fdaeffff memory:fda00000-fda0ffff

    lsusb:

        Bus 003 Device 002: ID 0bc2:3332 Seagate RSS LLC Expansion
        Bus 003 Device 003: ID 05e3:0605 Genesys Logic, Inc. USB 2.0 Hub [ednet]
        Bus 003 Device 006: ID 0457:0163 Silicon Integrated Systems Corp. 802.11 Wireless LAN Adapter
        Bus 005 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 005: ID 0781:5530 SanDisk Corp. Cruzer
        Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Help? I'm not sure where to start. I've been browsing forums and such for a long time and nothing I try is working.
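
    For what it's worth, the lsusb line with ID 0457:0163 shows the adapter is at least visible on the USB bus (it's a SiS-based 802.11 stick), so the question is whether any driver binds to it after the update. A few non-destructive checks that usually narrow this down, as a sketch:

        # Did the kernel say anything about the stick or its driver?
        dmesg | grep -i -e wlan -e wireless -e 0457

        # Show the USB topology, including which driver (if any) claims each device
        lsusb -t

        # Make sure the radio is not soft- or hard-blocked
        rfkill list all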


  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation.

    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand.

    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.

    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools.

    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data.

    Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex, or the queries which aggregate across periods, regions etc. are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions.

    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems.

    So if you have sixty seconds to spare, tell us in the survey!


  • How do I stop cropping in iMovie?

    - by Javoid
    I'm not certain if 'crop' is the right term. I'm new to iMovie, so I may have some of the terminology wrong. When I import a movie, I allow it to resize to the recommended settings instead of full size. Then I drag it from the lower event pane into the project pane. In the preview pane, where the movie can be played, the top and bottom of the movie are cut off a little, as if it's being converted to widescreen by cutting off the top and bottom. After I publish the movie in mobile format (it won't allow any larger), it still has the top and bottom removed. How can I stop this? I even tried importing at full (original) size and it still didn't help.


  • Ubuntu Linux: Process swap memory and memory usage

    - by David Halter
    My Ubuntu eats more memory than the task manager is showing:

        sudo ps -e --format rss | awk 'BEGIN{c=0} {c+=$1} END{print c/1024}'
        1043.84

        free -m
                     total       used       free     shared    buffers     cached
        Mem:          3860       1878       1982          0         20        679
        -/+ buffers/cache:       1178       2681
        Swap:         2729       1035       1693

    That's strange. Can someone explain this difference? But what is more important: I'd like to know how much memory a process is really using. I don't want to know the virtual memory size, but rather the resident memory plus the swap of a process. I have also tried the format param "sz" of ps, but the sum of that is too high (5450 MB); param "size" gives 8323.45 MB. Are there any other options? I really want to use this to determine which programs/processes are eating too much memory (and swap), so I can kill them, because hibernate might not work if the swap partition is too small.
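
    Modern kernels expose per-process swap usage as a VmSwap field in /proc/PID/status, alongside VmRSS, so resident-plus-swap can be summed directly from there. A minimal sketch, assuming the kernel is recent enough to report VmSwap:

        # Print RSS+swap in MB for each process, largest last
        for f in /proc/[0-9]*/status; do
            awk '/^Name:/  {name=$2}
                 /^VmRSS:/ {rss=$2}
                 /^VmSwap:/{swap=$2}
                 END {if (rss+swap > 0) printf "%.1f MB\t%s\n", (rss+swap)/1024, name}' "$f" 2>/dev/null
        done | sort -n | tail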


  • Game Messaging System Design

    - by you786
    I'm making a simple game, and have decided to try to implement a messaging system. The system basically works like this: an entity generates a message; the message is posted to a global message queue; the messageManager notifies every object of the new message through onMessageReceived(Message msg); and if an object wants, it acts on the message.

    The way I'm making message objects is like this:

        // base message class, never actually instantiated
        abstract class Message {
            Entity sender;
        }

        class PlayerDiedMessage extends Message {
            int livesLeft;
        }

    Now my SoundManagerEntity can do something like this in its onMessageReceived() method:

        public void messageReceived(Message msg) {
            if (msg instanceof PlayerDiedMessage) {
                PlayerDiedMessage diedMessage = (PlayerDiedMessage) msg;
                if (diedMessage.livesLeft == 0)
                    playSound(SOUND_DEATH);
            }
        }

    The pros of this approach:

        - Very simple and easy to implement.
        - The message can contain as much information as you want, because you can just create a new Message subclass with whatever info is necessary.

    The cons:

        - I can't figure out how to recycle Message objects into an object pool, unless I have a different pool for each subclass of Message. So I have lots and lots of object creation/memory allocation over time.
        - I can't send a message to a specific recipient, but I haven't needed that yet in my game, so I don't mind it too much.

    What am I missing here? There must be a better implementation or some idea that I'm missing.


  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60 GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I did have Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the drive, but if I could just access them straight off, that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help.

    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/

    EDIT: Result from sudo fdisk -l:

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d2dfd

        Device Boot    Start    End    Blocks     Id  System
        /dev/sda1   *      1   6994  56174592     83  Linux
        /dev/sda2       6994   7298   2438145      5  Extended
        /dev/sda5       6994   7298   2438144     82  Linux swap / Solaris

    @djeykib So very close to fixing it. Unfortunately, on the last command you gave it says this:

        $ sudo apt-get install linux-lts-backport-natty
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-lts-backport-natty

    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I want it working still :(
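
    Since sudo fdisk -l lists only the 60 GB SSD, the kernel is not detecting the 1 TB drive at all, which points at cabling, the SATA port, or controller/driver support rather than a mounting problem. A few checks worth running before opening the case, as a sketch:

        # Any block devices beyond the SSD?
        ls /sys/block/

        # Did the kernel log an ATA/SATA probe for a second disk?
        dmesg | grep -i -e ata -e sdb

        # Force a rescan of each SATA host adapter (harmless if nothing is attached)
        for h in /sys/class/scsi_host/host*/scan; do echo '- - -' | sudo tee "$h"; done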


  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files: if a file's size and modification time are unchanged, it assumes its contents are unchanged.

        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
          * [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files.

    Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution: I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t " will do that easily.
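
    As a sketch of that fix: GNU touch's -t flag takes a timestamp in [[CC]YY]MMDDhhmm[.ss] form, so the SCCS revision date can be stamped onto the file before each commit, guaranteeing the modification time differs between revisions:

        # Stamp the file with the revision's own date (2005-07-02 00:00:00 here), then commit
        touch -t 200507020000.00 nicstat.c
        hg commit -m "import 2005-07-02 revision of nicstat.c"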


  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame:

        // Bind VAO
        glBindVertexArray(m_vao);

        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);

        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));

        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    Here m_vertices.data() is the entire stored array, and numVertices * sizeof(VertexPosition) is the amount of data to read from it. Is this not the correct way to approach this? I do not wish to use std::vector if possible.


  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4TB hard drive and formatted it with NTFS using:

        parted /dev/sda
        > mklabel gpt
        > mkpart pri 1 -1
        mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual-core CPU. This is the output of parted /dev/sda unit s print:

        Model: ATA ST4000DM000-1F21 (scsi)
        Disk /dev/sda: 7814037168s
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
        1       2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4, I get good write speeds of about 140 MB/s. How can I fix the write speed?
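
    Since mount.ntfs (the ntfs-3g FUSE process) is pinning the CPU, the usual first lever on these ntfs-3g versions is the big_writes mount option, which lets FUSE pass write requests in chunks larger than 4 KiB and sharply cuts per-call overhead. A sketch, assuming the partition is currently mounted at /mnt:

        # Remount with larger FUSE write requests
        sudo umount /dev/sda1
        sudo mount -t ntfs-3g -o big_writes /dev/sda1 /mnt

        # Or persistently, via an /etc/fstab line such as:
        # /dev/sda1  /mnt  ntfs-3g  defaults,big_writes  0  0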


  • game play strategy in an arena

    - by joulesm
    I am writing a player's behavior for an arena game, and I'm wondering if you can offer some strategies. I'm writing it in Python, but I'm just interested in the high-level game play. Here are the game aspects:

        - The arena is a circle of a given size; the arena shrinks every round to help break ties.
        - Players are much smaller circles and can be on teams of 1 or 2 players.
        - Players attack by colliding with other players; based on the physics of the collision (speed of both players, angle), one can force another player out of the arena.
        - Once a player is out of the arena, they are out of the game (for that round).
        - The goal is to be the only team with players left in the arena, all other players having been pushed out (through collisions or mistakes). It is possible for there to be no winner, if the last two players exit the arena at the same time.
        - Once the player has been programmed, the game just runs; there is no human intervention.

    I'm thinking it's easiest to implement a few simple programmatic rules for my player to follow, for example: stay close to the center of the arena, attack opponents from the inner side of the arena, etc. Are there any good simple game strategies? Would adding a random aspect to the game help, for example to avoid predictability by the other team? Thanks in advance.


  • How to resize a ubuntu partition to make more room for windows

    - by Jeremy
    My laptop has Windows 7 and Ubuntu 13.10 installed alongside each other, and two 225 GB hard drives. I gave Ubuntu 133.65 GB and Windows 87.76 GB on the same hard drive (C). My problem now is that Windows is almost out of space, but Ubuntu is only using a few GB of the 133.65 GB I gave it. I want to shrink Ubuntu's partition and give that space to Windows to grow its partition. Is there a program that can do this?


  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say, a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.

    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects.

    At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors, to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).

    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
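
    As a rough worked example of that arithmetic (an illustration under the assumptions stated above, not a measurement): a synthetic long key plus a small numeric value amounts to roughly 16 bytes of logical data, but storing it typically involves at least a Map.Entry instance, a key object, a value object, and the LocalCache metadata. At the 32-byte-per-object allocation floor mentioned above, four objects already occupy 128 bytes before headers, references, and padding are counted, so a footprint of several hundred bytes, and the quoted 5x to 10x inflation, follows directly.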

