Search Results

Search found 20671 results on 827 pages for 'android device'.


  • lirc_zilog IR transmission no longer working with HD-PVR on 12.04

    - by johnf
    I have been running Ubuntu 10.04 with a patched version of lirc_zilog for two years. I upgraded to 12.04 and lirc_zilog is no longer working with my HD-PVR. The MythTV wiki reports that it worked out of the box with 11.04. The error I get from irsend is as follows:

      johnf@carbon:~$ /usr/local/bin/irsend SEND_ONCE blaster 0_130_KEY_POWER
      irsend: command failed: SEND_ONCE blaster 0_130_KEY_POWER
      irsend: hardware does not support sending

    The lircd daemon, run interactively, reports the following:

      lircd: accepted new client on /var/run/lirc/lircd
      lircd: could not get hardware features
      lircd: this device driver does not support the LIRC ioctl interface
      lircd: major number of /dev/lirc0 is 250
      lircd: LIRC major number is 61
      lircd: check if /dev/lirc0 is a LIRC device
      lircd: WARNING: Failed to initialize hardware
      lircd: error processing command: SEND_ONCE blaster 0_130_KEY_POWER
      lircd: hardware does not support sending
      lircd: removed client

    Checking dmesg seems to indicate that the kernel module is loading properly:

      [56497.730743] lirc_zilog: module is from the staging directory, the quality is unknown, you have been warned.
      [56497.730999] lirc_zilog: Zilog/Hauppauge IR driver initializing
      [56497.732484] lirc_zilog: ir_probe: ir_rx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x71
      [56497.732493] lirc_zilog: ir_probe: ir_tx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x70
      [56497.732496] lirc_zilog: probing IR Tx on Hauppage HD PVR I2C (i2c-0)
      [56497.756822] lirc_zilog: firmware of size 302355 loaded
      [56497.756945] lirc_zilog: 743 IR blaster codesets loaded
      [56497.757030] i2c i2c-0: lirc_dev: driver lirc_zilog registered at minor = 0
      [56497.757033] lirc_zilog: IR unit on Hauppage HD PVR I2C (i2c-0) registered as lirc0 and ready
      [56497.757035] lirc_zilog: probe of IR Tx on Hauppage HD PVR I2C (i2c-0) done
      [56497.757056] lirc_zilog: initialization complete

    Here is my /etc/lirc/hardware.conf:

      #Chosen IR Transmitter
      TRANSMITTER="HD-PVR"
      TRANSMITTER_MODULES="lirc_dev lirc_zilog"
      TRANSMITTER_DRIVER=""
      TRANSMITTER_DEVICE="/dev/lirc0"
      TRANSMITTER_SOCKET=""
      TRANSMITTER_LIRCD_CONF=""
      TRANSMITTER_LIRCD_ARGS=""

    My lircd.conf is a copy of the recommended one. Examination of the kernel source seems to indicate that the lirc_zilog module should support transmission; it is newer than the patched version I was manually compiling on 10.04. I was previously using a manually built lirc 0.8.7 rather than the packaged one; I am now running the packaged 0.9.0. I can provide any additional information required and will run tests quickly. I'm very eager to get this issue resolved.
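    In case it is useful, here is the kind of manual test I have been trying in order to take the packaged init script out of the picture. This is only a sketch of what I think should work; the driver name ("default") and the idea of pointing lircd straight at /dev/lirc0 are my assumptions, not something the 0.9.0 docs told me to do for this card:

      # stop the packaged daemon so it does not hold the socket/device
      sudo service lirc stop

      # run lircd in the foreground against the zilog device explicitly
      # (-n = no daemon, -H = driver, -d = device node; paths are guesses)
      sudo lircd -n -H default -d /dev/lirc0 /etc/lirc/lircd.conf

      # in a second terminal, retry the send and watch lircd's output
      irsend SEND_ONCE blaster 0_130_KEY_POWER

    If that behaves differently from the boot-time setup, I would at least know whether the problem is in the driver or in how the init script assembles the lircd command line.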

    Read the article

  • How to install Wacom Bamboo Pen

    - by casadrya
    I have a new Wacom Bamboo Pen. I'm using Ubuntu 10.10 64-bit. After googling a little bit, I checked that xserver-xorg-input-wacom was installed. I plugged in my tablet. I rebooted my computer. Nothing special happened. I opened Inkscape. The tablet didn't work. I opened Inkscape's Input devices dialog. I didn't understand anything. I tried to blindly click some options in that dialog but nothing seemed to have any effect. Same with Gimp. After googling some more I found the linuxwacom website with source code; this didn't seem to work either. So... any help? As requested:

      lsusb
      Bus 005 Device 002: ID 056a:00d4 Wacom Co., Ltd

      dmesg | tail
      [ 492.961267] usb 5-1: new full speed USB device using uhci_hcd and address 3
      [ 493.144862] input: Wacom Bamboo 4x5 Pen as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.0/input/input6
      [ 493.158854] input: Wacom Bamboo 4x5 Finger as /devices/pci0000:00/0000:00:1a.2/usb5/5-1/5-1:1.1/input/input7
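    A check I am planning to try next (mostly guesses on my part; the exact xsetwacom syntax on 10.10 may differ):

      # does the X server see the tablet at all?
      xinput list

      # did the wacom X driver claim it when it was hot-plugged?
      grep -i wacom /var/log/Xorg.0.log

      # if the driver is loaded, this should list the pen/touch devices
      # (on some versions the command is "xsetwacom --list devices")
      xsetwacom list

    If xinput shows the "Wacom Bamboo 4x5 Pen" device, then I assume what remains is only the per-application configuration in Inkscape/Gimp rather than the driver itself.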

    Read the article

  • Farmyard

    - by Richard Jones
    Moooooooo. For a while now we’ve been using Apple’s enterprise app distribution mechanism for iOS devices. It lets a user click on a URL on their device and pull down a new version of an enterprise app off one of our servers. It's really nice; have a look at http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html I’ve embedded this into a check at application launch: a web service is called to detect whether a newer version of the software is available, and if so the app calls the distribution URL and the new version is deployed. You can alert users that a new app update is available by sending them a push notification (see the screenshot at the top). We send our push notifications out to users using a simple C# service. The fun part is this: you can instruct the push notification to play a sound that is already embedded in the app. So our push notifications play a random farmyard noise, picked from a selection of cow.wav, dogbrk.wav, duck.wav, goose.wav, horse.wav, lamb.wav, monkey.wav (left field, I know) and rooster.wav. Imagine my amusement at being able to periodically send out an update and watch our office (of about 60 people) turn into a farm for a few seconds. I’ve messed up a few times, with people being interrupted on customer conference calls, but people seem good humoured about it (so far). Simple(ish) pleasures…

    Read the article

  • Mobile cross-platform SDK for computationally intensive apps

    - by K.Steff
    I am aware of the PhoneGap toolkit for creating mobile applications for virtually all mobile platforms with a significant market share. However, the code in PhoneGap that is shared between the different platforms is written in JavaScript. While I like JS, I think it's hardly appropriate for computationally intensive tasks. The situation with Titanium is pretty much the same. So, is there any way that I can create a cross-platform mobile app that has the computationally intensive code shared between the platforms? Some context: Obviously, I don't want to implement the time consuming algorithm in many different languages, since this violates DRY, increases the chance for bugs slipping in at least one version and require boilerplate code to work. I've looked at Xamarin's MonoTouch and Mono for Android tools, but while they cover iOS and Android, they're not nearly as versatile for deployment as PhoneGap. On the other hand, (IMO) the statically typed nature of C# is more suited for intense computation than JS. Are there any other SDK/tools appropriate for the task that I don't know about or a point about the mentioned above that I've missed? Also, uploading data to a web service for processing is not an option, because of the traffic required.

    Read the article

  • Ubuntu hangs after replugging the modem

    - by Iftekhar Ahmed Shafi
    I am facing this issue on Ubuntu 12.04 with my WiMAX modem, which has a Beceem chipset. If I replug the device, Ubuntu hangs. It happens very often. If I plug in the USB modem and restart, it works most of the time, but it is very annoying to restart every time it hangs. The modem works smoothly in Windows 7, and it used to work OK in Ubuntu 11.10. Here is the syslog around the time of a hang:

      Jun 13 19:25:54 iftekhar-HP-520-Notebook-PC dbus[1102]: [system] Activating service name='org.freedesktop.UDisks' (using servicehelper)
      Jun 13 19:25:54 iftekhar-HP-520-Notebook-PC dbus[1102]: [system] Successfully activated service 'org.freedesktop.UDisks'
      Jun 13 19:26:00 iftekhar-HP-520-Notebook-PC goa[2168]: goa-daemon version 3.4.0 starting [main.c:112, main()]
      Jun 13 19:26:47 iftekhar-HP-520-Notebook-PC udevd[432]: timeout 'cdrom_id --lock-media /dev/sr1'
      Jun 13 19:26:48 iftekhar-HP-520-Notebook-PC udevd[432]: timeout: killing 'cdrom_id --lock-media /dev/sr1' [1072]
      Jun 13 19:26:49 iftekhar-HP-520-Notebook-PC kernel: [ 85.820162] sr 2:0:0:0: Device offlined - not ready after error recovery
      Jun 13 19:26:49 iftekhar-HP-520-Notebook-PC udevd[432]: timeout: killing 'cdrom_id --lock-media /dev/sr1' [1072]
      Jun 13 19:27:20 udevd[432]: last message repeated 31 times
      Jun 13 19:27:25 udevd[432]: last message repeated 4 times

    What can I do to avoid these hangs and restarts?
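    In case it helps narrow things down, this is what I plan to capture the next time it happens (nothing clever, just my guess at a useful diagnostic; the log file names are the standard 12.04 ones):

      # in one terminal, watch the kernel log while replugging the modem
      tail -f /var/log/kern.log

      # in another, note what the device enumerates as before and after the replug
      lsusb
      dmesg | tail -n 30

    If the kernel log stops dead at the moment of the hang, I assume the problem is in the Beceem driver itself rather than in NetworkManager, but I am not sure how to confirm that.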

    Read the article

  • How to make a btrfs snapshot?

    - by MountainX
    My /home partition consists of an entire physical disk, formatted as btrfs. I want to snapshot it, and I'm confused about subvolume naming in particular. I am aware that there are similar questions, but each of them seems to be asking something different from what I'm asking (and they are older, which probably means outdated, given the rapid development of btrfs). For example, the answer to one of them is apparently not the answer to my question, because my /home partition is a separate volume and the btrfs man page now shows a different command for creating snapshots. Another describes a similar problem with no solid solution, and a third is from someone as confused as me about the naming issue.

    My question, starting simple: is this the correct command to take a snapshot of my home partition?

      btrfs subvolume snapshot /home/@home /home/@home_snapshot_20120421

    I got really brave and tested it, and it does not work. The error is "error accessing /home/@home". As shown below, @home is listed, so I'm obviously confused about subvolume names. Do I need to use them when creating snapshots? Some examples take snapshots of home using /home as the source parameter, but based on examples for root volumes it seems to me that I need to use /home/@home. Would this command work, and if not, why?

      btrfs subvolume snapshot /home /home/@home_snapshot_20120421

    Is the @ just a naming convention? Is it meaningful at all? Here's some output that may be relevant:

      btrfs subvolume list /home
      ID 256 top level 5 path @home

    I'm not sure what that means, exactly. When I try btrfs device scan it gives an error (e.g. "unable to scan the device /dev/sda1"), but my file system doesn't have any errors; everything is fine.
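    For what it's worth, this is the sequence I am considering trying next, based on my (possibly wrong) understanding that @home is a subvolume that only exists relative to the top level of the filesystem, not under the mounted /home. The device name /dev/sdb1 is just a placeholder for whatever my home disk actually is:

      # mount the top level of the btrfs filesystem somewhere temporary
      sudo mkdir -p /mnt/btrfs-top
      sudo mount -o subvolid=5 /dev/sdb1 /mnt/btrfs-top

      # the @home subvolume should now be visible as a directory here
      sudo btrfs subvolume list /mnt/btrfs-top

      # snapshot it next to itself, so it shows up as another subvolume
      sudo btrfs subvolume snapshot /mnt/btrfs-top/@home /mnt/btrfs-top/@home_snapshot_20120421

      sudo umount /mnt/btrfs-top

    Can anyone confirm whether that is the right model, or whether snapshotting /home directly is equivalent?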

    Read the article

  • External USB hard-drive changing drive letter

    - by Sydius
    I have a Seagate FreeAgent Go external USB hard drive that was mounted but mysteriously decided to reconnect itself:

      Sep 30 15:07:06 feinman kernel: [243901.551604] usb 1-1.2: USB disconnect, device number 3
      Sep 30 15:07:06 feinman kernel: [243901.553828] sd 6:0:0:0: [sdb] Synchronizing SCSI cache
      Sep 30 15:07:06 feinman kernel: [243901.553893] sd 6:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
      Sep 30 15:07:10 feinman kernel: [243905.336557] usb 1-1.2: new high-speed USB device number 4 using ehci_hcd
      Sep 30 15:07:10 feinman kernel: [243905.431219] scsi7 : usb-storage 1-1.2:1.0
      Sep 30 15:07:11 feinman kernel: [243906.427207] scsi 7:0:0:0: Direct-Access Seagate FreeAgent Go 0148 PQ: 0 ANSI: 4
      Sep 30 15:07:11 feinman kernel: [243906.428303] sd 7:0:0:0: Attached scsi generic sg1 type 0
      Sep 30 15:07:11 feinman kernel: [243906.430317] sd 7:0:0:0: [sdc] 625142447 512-byte logical blocks: (320 GB/298 GiB)
      Sep 30 15:07:11 feinman kernel: [243906.430860] sd 7:0:0:0: [sdc] Write Protect is off
      Sep 30 15:07:11 feinman kernel: [243906.430865] sd 7:0:0:0: [sdc] Mode Sense: 1c 00 00 00
      Sep 30 15:07:11 feinman kernel: [243906.431386] sd 7:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      Sep 30 15:07:11 feinman kernel: [243906.493674] sdc: sdc1
      Sep 30 15:07:11 feinman kernel: [243906.496109] sd 7:0:0:0: [sdc] Attached SCSI disk

    It changed from sdb to sdc, causing a number of problems for me. What can I do to further track down the cause? I thought it might be a problem with it sleeping, but when I cat /sys/class/scsi_disk/6\:0\:0\:0/allow_restart I see that it's already 1.
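    As a stopgap while I track this down, I am thinking of switching anything that references the disk from /dev/sdb to one of the stable symlinks udev creates, so that a renumbering does not break things. This is just the generic approach; the mount point and UUID below are made-up placeholders:

      # see the persistent names for the drive
      ls -l /dev/disk/by-id/ /dev/disk/by-uuid/
      sudo blkid

      # example fstab line using the UUID instead of /dev/sdb1
      # UUID=0000-0000  /media/freeagent  ntfs  defaults  0  0

    That should make the sdb/sdc flip harmless, but I would still like to understand why the drive is disconnecting in the first place.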

    Read the article

  • Struggles to connect to network when using WPA with a BCM43225

    - by pst007x
    When booting my laptop, it tries to connect to my wireless network, but a window keeps popping up asking me for my security password, which has already been saved. I have to keep deleting my network settings and reconnecting, otherwise it keeps failing to connect. My wireless is set up with WPA; I do not want to lower my security because of this, but it is a pain and can take me 15 minutes or more to finally connect. The problem has only become apparent since a fresh install of 11.10. IPv6 is disabled. System info:

      01:00.0 Ethernet controller: Broadcom Corporation NetLink BCM57780 Gigabit Ethernet PCIe (rev 01)
              Subsystem: Acer Incorporated [ALI] Device 036d
              Flags: bus master, fast devsel, latency 0, IRQ 43
              Memory at b3400000 (64-bit, non-prefetchable) [size=64K]
              Capabilities: <access denied>
              Kernel driver in use: tg3
              Kernel modules: tg3

      02:00.0 Network controller: Broadcom Corporation BCM43225 802.11b/g/n (rev 01)
              Subsystem: Broadcom Corporation Device 04da
              Flags: bus master, fast devsel, latency 0, IRQ 17
              Memory at b2400000 (64-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: brcmsmac
              Kernel modules: wl, brcmsmac

    ADDITIONAL: In a terminal I get this:

      pst007x@pst007x-ubuntu64:~$ nm-applet start
      ** Message: applet now removed from the notification area
      ** (nm-applet:2816): DEBUG: old state indicates that this was not a disconnect 0
      ** Message: using fallback from indicator to GtkStatusIcon
      ** Message: applet now embedded in the notification area
      ** Message: No keyring secrets found for Auto Access 01/802-11-wireless-security; asking user.
      ** (nm-applet:2816): DEBUG: foo_client_state_changed_cb

    Note this line: "** Message: No keyring secrets found for Auto Access 01/802-11-wireless-security; asking user." That is the point at which I am asked for the password. Please report WPA issues with Ubuntu 11.10 here: https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/892727
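    One thing I want to check, since the message is about keyring secrets: whether the connection is stored as a system connection, so its secrets live under /etc/NetworkManager rather than in my user keyring (which may not be unlocked yet at the moment nm-applet asks). I am not certain this is the cause; it is just my working theory:

      # system-wide connections (secrets stored on disk, available before login)
      sudo ls -l /etc/NetworkManager/system-connections/

      # if my WPA connection is not listed there, I plan to open it in
      # nm-connection-editor, tick "Available to all users", and reconnect
      nm-connection-editor

    Does anyone know whether "No keyring secrets found" really points at the keyring, or whether it can also be a brcmsmac driver symptom?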

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this error, and it seems dozens of people on the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard just to get past it. After this the system boots and everything is fine: the md0 device works, /proc/mdstat is fine, and when I do mount -a it mounts the array without errors and it works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints on how to make it work properly?

    fstab:

      UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR CENSORED

      # definitions of existing MD arrays
      ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

      # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
      # by mkconf 3.1.2-2
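    Things I am considering trying, listed here in case they help anyone point me in the right direction. I am not sure any of them is correct; they are just the usual suspects I have collected from searching (the mismatch between /dev/md0 in the error and ARRAY /dev/md/0 in mdadm.conf is what I am most suspicious of):

      # regenerate the ARRAY line from the running array and compare it
      # to the one currently in /etc/mdadm/mdadm.conf
      sudo mdadm --detail --scan

      # if the generated line differs, update mdadm.conf to match,
      # then rebuild the initramfs so the boot-time copy agrees
      sudo update-initramfs -u

      # sanity check on the assembled array
      cat /proc/mdstat

    Is the /dev/md0 vs /dev/md/0 distinction meaningful here, or is that a red herring?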

    Read the article

  • NVIDIA proprietary driver logging me to console instead of GUI

    - by Woozie
    Firstly I want to apologise for any mistakes; English is not my native language. My problem is that I can't get the NVIDIA proprietary drivers to work. I tried to install them on Ubuntu 12.04.1 (32 and 64 bit), Ubuntu 12.10 Beta 2, Linux Mint 13 Cinnamon 64 bit and openSUSE 12.2 64 bit, and the error and symptoms (being logged in to tty1 instead of the GUI, low-res boot screen) are the same on all of these distros. The error appears on sudo startx:

      NVIDIA: could not open the device file /dev/nvidia0 (Input/output error).

    I know that's a common problem, but I have tried blacklisting and even removing the nouveau driver, installing the NVIDIA driver from the repo / from the official installer script / from "Additional drivers", editing xorg.conf, using Xorg -configure and nvidia-xconfig, updating the kernel and the entire distro, and many, many other things that I don't remember. But it gets better: the entire Cinnamon desktop (Mint) freezes while I work. This is the error that appears during the freeze:

      Oct 1 20:57:17 WoozieLaptop kernel: [ 308.120176] [drm] nouveau 0000:01:00.0: PFIFO_CACHE_ERROR - Ch 4/1 Mthd 0x0060 Data 0xbcef0201

    My Xorg.0.log is here (it was made on Ubuntu 12.04.1 after installing the NVIDIA drivers, obviously). inxi -G from Mint:

      Graphics: Card: NVIDIA GT216 [GeForce GT 240M]
      X.org: 1.11.3 drivers: (unloaded: nvidia) FAILED: nouveau,vesa,fbdev
      tty size: 80x25 Advanced Data: N/A for root out of X

    lspci -k | grep -A2 VGA from Mint:

      01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 240M] (rev a2)
              Subsystem: Lenovo Device 38ff
              Kernel driver in use: nvidia

    My hardware: Lenovo IdeaPad Y550, Intel C2D T6600, NVIDIA GeForce GT 240M, 4 GB of RAM. Any help will be appreciated - this problem has made my laptop unusable for daily work. Cheers, Woozie
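    Since the freeze message comes from nouveau even though lspci claims the nvidia driver is in use, one thing I still want to verify is which kernel module is actually bound at runtime, and whether my blacklist ever made it into the initramfs. I may have already done this correctly and forgotten, so treat this as a note to myself (the blacklist file name is just the one I chose, nothing standard):

      # which of the two drivers is actually loaded right now?
      lsmod | grep -E 'nouveau|nvidia'

      # is nouveau blacklisted, and did the initramfs get rebuilt afterwards?
      cat /etc/modprobe.d/blacklist-nouveau.conf
      sudo update-initramfs -u

      # what does X itself say it tried to load?
      grep -iE 'nouveau|nvidia|glx' /var/log/Xorg.0.log

    If nouveau still shows up in lsmod after a reboot, I assume that explains both the /dev/nvidia0 I/O error and the PFIFO_CACHE_ERROR freezes, but I would appreciate confirmation from someone who knows this hardware better.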

    Read the article

  • XNA shield effect with a primitive sphere problem

    - by Sparky41
    I'm having issue with a shield effect i'm trying to develop. I want to do a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png I currently got this: http://i.imgur.com/Jdin7.png (The red likes are a simple texture a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point) This sphere is drawn via a primitive (DrawIndexedPrimitives) This is how i calculate the pieces of the sphere using a class i've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d) public class Sphere { // During the process of constructing a primitive model, vertex // and index data is stored on the CPU in these managed lists. List vertices = new List(); List indices = new List(); // Once all the geometry has been specified, the InitializePrimitive // method copies the vertex and index data into these buffers, which // store it on the GPU ready for efficient rendering. VertexBuffer vertexBuffer; IndexBuffer indexBuffer; BasicEffect basicEffect; public Vector3 position = Vector3.Zero; public Matrix RotationMatrix = Matrix.Identity; public Texture2D texture; /// <summary> /// Constructs a new sphere primitive, /// with the specified size and tessellation level. /// </summary> public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback) { texture = text; if (tessellation < 3) throw new ArgumentOutOfRangeException("tessellation"); int verticalSegments = tessellation; int horizontalSegments = tessellation * 2; float radius = diameter / 2; // Start with a single vertex at the bottom of the sphere. AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero);//bottom position5 // Create rings of vertices at progressively higher latitudes. for (int i = 0; i < verticalSegments - 1; i++) { float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2; float dy = (float)Math.Sin(latitude / up);//(up)5 float dxz = (float)Math.Cos(latitude); // Create a single ring of vertices at this latitude. for (int j = 0; j < horizontalSegments; j++) { float longitude = j * MathHelper.TwoPi / horizontalSegments; float dx = (float)(Math.Cos(longitude) * dxz) / portstar;//port and starboard (right)2 float dz = (float)(Math.Sin(longitude) * dxz) * frontback;//front and back1.4 Vector3 normal = new Vector3(dx, dy, dz); AddVertex(normal * radius, normal, new Vector2(j, i)); } } // Finish with a single vertex at the top of the sphere. AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One);//top position5 // Create a fan connecting the bottom vertex to the bottom latitude ring. for (int i = 0; i < horizontalSegments; i++) { AddIndex(0); AddIndex(1 + (i + 1) % horizontalSegments); AddIndex(1 + i); } // Fill the sphere body with triangles joining each pair of latitude rings. for (int i = 0; i < verticalSegments - 2; i++) { for (int j = 0; j < horizontalSegments; j++) { int nextI = i + 1; int nextJ = (j + 1) % horizontalSegments; AddIndex(1 + i * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); AddIndex(1 + i * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + nextJ); AddIndex(1 + nextI * horizontalSegments + j); } } // Create a fan connecting the top vertex to the top latitude ring. 
for (int i = 0; i < horizontalSegments; i++) { AddIndex(CurrentVertex - 1); AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments); AddIndex(CurrentVertex - 2 - i); } //InitializePrimitive(graphicsDevice); } /// <summary> /// Adds a new vertex to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate) { vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate)); } /// <summary> /// Adds a new index to the primitive model. This should only be called /// during the initialization process, before InitializePrimitive. /// </summary> protected void AddIndex(int index) { if (index > ushort.MaxValue) throw new ArgumentOutOfRangeException("index"); indices.Add((ushort)index); } /// <summary> /// Queries the index of the current vertex. This starts at /// zero, and increments every time AddVertex is called. /// </summary> protected int CurrentVertex { get { return vertices.Count; } } public void InitializePrimitive(GraphicsDevice graphicsDevice) { // Create a vertex declaration, describing the format of our vertex data. // Create a vertex buffer, and copy our vertex data into it. vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal), vertices.Count, BufferUsage.None); vertexBuffer.SetData(vertices.ToArray()); // Create an index buffer, and copy our index data into it. indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort), indices.Count, BufferUsage.None); indexBuffer.SetData(indices.ToArray()); // Create a BasicEffect, which will be used to render the primitive. basicEffect = new BasicEffect(graphicsDevice); //basicEffect.EnableDefaultLighting(); } /// <summary> /// Draws the primitive model, using the specified effect. Unlike the other /// Draw overload where you just specify the world/view/projection matrices /// and color, this method does not set any renderstates, so you must make /// sure all states are set to sensible values before you call it. /// </summary> public void Draw(Effect effect) { GraphicsDevice graphicsDevice = effect.GraphicsDevice; // Set our vertex declaration, vertex buffer, and index buffer. graphicsDevice.SetVertexBuffer(vertexBuffer); graphicsDevice.Indices = indexBuffer; graphicsDevice.BlendState = BlendState.Additive; foreach (EffectPass effectPass in effect.CurrentTechnique.Passes) { effectPass.Apply(); int primitiveCount = indices.Count / 3; graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vertices.Count, 0, primitiveCount); } graphicsDevice.BlendState = BlendState.Opaque; } /// <summary> /// Draws the primitive model, using a BasicEffect shader with default /// lighting. Unlike the other Draw overload where you specify a custom /// effect, this method sets important renderstates to sensible values /// for 3D model rendering, so you do not need to set these states before /// you call it. /// </summary> public void Draw(Camera camera, Color color) { // Set BasicEffect parameters. basicEffect.World = GetWorld(); basicEffect.View = camera.view; basicEffect.Projection = camera.projection; basicEffect.DiffuseColor = color.ToVector3(); basicEffect.TextureEnabled = true; basicEffect.Texture = texture; GraphicsDevice device = basicEffect.GraphicsDevice; device.DepthStencilState = DepthStencilState.Default; if (color.A < 255) { // Set renderstates for alpha blended rendering. 
device.BlendState = BlendState.AlphaBlend; } else { // Set renderstates for opaque rendering. device.BlendState = BlendState.Opaque; } // Draw the model, using BasicEffect. Draw(basicEffect); } public virtual Matrix GetWorld() { return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position); } } public struct VertexPositionNormal : IVertexType { public Vector3 Position; public Vector3 Normal; public Vector2 TextureCoordinate; /// <summary> /// Constructor. /// </summary> public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor) { Position = position; Normal = normal; TextureCoordinate = textCoor; } /// <summary> /// A VertexDeclaration object, which contains information about the vertex /// elements contained within this struct. /// </summary> public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration ( new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0), new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0), new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0) ); VertexDeclaration IVertexType.VertexDeclaration { get { return VertexPositionNormal.VertexDeclaration; } } } A simple call to the class to initialise it. The Draw method is called in the master draw method in the Gamecomponent. My current thoughts on this are: The direction of the weapon hitting the ship is used to get the middle position for the texture Wrap a texture around the drawn sphere based on this point of contact Problem is i'm not sure how to do this. Can anyone help or if you have a better idea please tell me i'm open for opinion? :-) Thanks.

    Read the article

  • Why doesn't my cube hold a position?

    - by Christian Frantz
    I gave up on a previous method of creating cubes and went with a list to hold my cube objects. The list is being populated from an array like so:

      #region MAP
      float[,] map =
      {
          {0, 0, 0, 0, 0},
          {0, 0, 0, 0, 0},
          {0, 0, 0, 0, 0},
          {0, 0, 0, 0, 0},
          {0, 0, 0, 0, 0}
      };
      #endregion MAP

      for (int x = 0; x < mapWidth; x++)
      {
          for (int z = 0; z < mapHeight; z++)
          {
              cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), Color.Green));
          }
      }

    The cube follows all the parameters of what I had before; this is just easier to deal with. But when I debug, every cube has a position of (0, 0, 0) and there's just one black cube in the middle of my screen. What could I be doing wrong here?

      public Vector3 cubePosition { get; set; }

      public Cube(GraphicsDevice graphicsDevice, Vector3 Position, Color color)
      {
          device = graphicsDevice;
          color = Color.Green;
          Position = cubePosition;
          SetUpIndices();
          SetUpVerticesArray();
      }

    That's the cube constructor. All variables are being passed correctly, I think.

    Read the article

  • Why is wireless slow with Atheros AR9285?

    - by Luke
    I know there are many posts like this, however none of the fixes I have found have worked. I had the issue on 11.04, and after having no luck fixing it I decided to try 12.04, but this has not fixed the problem. I'm using a Lenovo IdeaPad; the network card is an Atheros Communications AR9285.

    Edit - adding outputs:

      sudo iwconfig
      lo        no wireless extensions.

      wlan0     IEEE 802.11bgn  ESSID:"NETGEAR-PLOW"
                Mode:Managed  Frequency:2.437 GHz  Access Point: E0:91:F5:7D:1B:BA
                Bit Rate=65 Mb/s   Tx-Power=15 dBm
                Retry  long limit:7   RTS thr:off   Fragment thr:off
                Encryption key:off
                Power Management:on
                Link Quality=66/70  Signal level=-44 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:77  Invalid misc:63   Missed beacon:0

      eth0      no wireless extensions.

      lspci -nnk | grep -iA2 net
      06:00.0 Network controller [0280]: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)
              Subsystem: Lenovo Device [17aa:30a1]
              Kernel driver in use: ath9k
      --
      07:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02)
              Subsystem: Lenovo Device [17aa:392e]
              Kernel driver in use: r8169

    Thanks
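    Two things I have seen suggested for ath9k that I have not properly tested yet, noted here mostly so people can tell me whether they are worth trying (the config file name under /etc/modprobe.d is my own choice):

      # the iwconfig output above shows Power Management:on; turn it off for this session
      sudo iwconfig wlan0 power off

      # disable hardware crypto in ath9k, which some AR9285 users report helps,
      # then reload the module (or reboot)
      echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
      sudo modprobe -r ath9k && sudo modprobe ath9k

    If either of these is known to be a dead end on the AR9285, I would rather not chase it.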

    Read the article

  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend so she can play games from her uni campus. The problem is that I keep running into this error when trying to start OpenVPN:

      Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]: ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory', 'iptables-restore: line 46 failed']: internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
      service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
      service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
      service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    I've already had my provider enable the TUN/TAP device driver, and I checked this using:

      # cat /dev/net/tun

    which returned "File descriptor in bad state", which I believe means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated.

    --edit-- Just stumbled across a post about this; not sure how I missed it earlier. I believe I need to get modprobe ipt_mark and modprobe ipt_MARK run on the host node by my provider. Is this correct, and something I should try to get done?
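    Before I go back to the provider, this is what I am planning to check from inside the container to see whether the MARK target is even available to iptables here. These are just my best guesses at useful checks on an OpenVZ-style guest, so corrections welcome:

      # list the netfilter targets the kernel currently exposes
      # (MARK should appear if ipt_MARK is loaded on the host)
      cat /proc/net/ip_tables_targets

      # try adding and then removing a harmless MARK rule as a direct test
      sudo iptables -t mangle -A OUTPUT -j MARK --set-mark 1
      sudo iptables -t mangle -D OUTPUT -j MARK --set-mark 1

    If the rule is rejected with "No chain/target/match by that name", I assume that confirms the host node really does need ipt_MARK loaded, and I can quote that result to the provider.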

    Read the article

  • Ubuntu OpenGL issues

    - by Dank101
    my OpenGL doesn't work at all i get Xlib: extension "GLX" missing on display ":0". lspci output 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0126] (rev 09) (prog-if 00 [VGA controller]) 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1246] (rev a1) (prog-if 00 [VGA controller]) dmesg | grep -i nvid [ 9.469068] nvidia: module license 'NVIDIA' taints kernel. [ 9.538786] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.538792] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.538796] nvidia 0000:01:00.0: enabling device (0006 -> 0007) [ 9.538803] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9.538809] nvidia 0000:01:00.0: setting latency timer to 64 [ 9.538942] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 304.48 Sun Sep 9 [10300.955799] nvidia 0000:01:00.0: restoring config space at offset 0xf (was 0x100, writing 0x10b) [10300.955803] nvidia 0000:01:00.0: restoring config space at offset 0xc (was 0x0, writing 0xfff80000) [10300.955807] nvidia 0000:01:00.0: restoring config space at offset 0x9 (was 0x1, writing 0x4001) [10300.955811] nvidia 0000:01:00.0: restoring config space at offset 0x7 (was 0xc, writing 0xd000000c) [10300.955814] nvidia 0000:01:00.0: restoring config space at offset 0x5 (was 0xc, writing 0xc000000c) [10300.955817] nvidia 0000:01:00.0: restoring config space at offset 0x4 (was 0x0, writing 0xf0000000) [10300.955820] nvidia 0000:01:00.0: restoring config space at offset 0x3 (was 0x800000, writing 0x10) [10300.955823] nvidia 0000:01:00.0: restoring config space at offset 0x1 (was 0x100006, writing 0x100007) my computer is a dell XPS l702x
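    What I plan to check next, in case someone can tell me which result matters (I am guessing this is an Optimus-style setup where the Intel GPU drives the display, but I have not confirmed that):

      # is the nvidia kernel module actually loaded, and is nouveau out of the way?
      lsmod | grep -E 'nvidia|nouveau'

      # did the X server load a GLX module, and whose is it?
      grep -i glx /var/log/Xorg.0.log

      # what does GL report once X is up? (glxinfo is in the mesa-utils package)
      glxinfo | head -n 10

    If the Xorg log shows the Intel driver running with Mesa's GLX while the NVIDIA libGL is installed, I assume that mismatch is what produces the 'extension "GLX" missing' error, but I do not know the right way to untangle it on this machine.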

    Read the article

  • Can't use nvidia card/driver on optimus notebook

    - by Mr. Pixel
    I installed (once again) the latest official nvidia driver for my GT540m on Ubuntu 11.10. Even though everything seems OK with my xorg.conf file (I've manually added BusID "PCI:1:0:0", since lspci shows 01:00.0 for my GPU). The problem is, when I use the xorg.conf file generated by Xorg -configure, Xorg automatically loads the Intel GPU. So I removed everything that was not related to my nvidia card, basically leaving my xorg.conf with one screen and one device (with the nvidia driver and the above-mentioned BusID), and Xorg fails to start. The log says something like "Devices on GT540m [newline] none" And a few lines later, something like "NVIDIA(0) found a screen, but have no device for it". When I don't set the BusID, it doesn't seem to detect my card either. Thank you for any suggestion. PS: If possible, I'd like to avoid bumblebee or any similar "hybrid graphics" solution, last time I tried I ended up reinstalling Ubuntu. Edit: Allow me to clarify the problem. I have a notebook with a GT540m graphics card, and an integrated intel gpu. I want to use the graphics card with full hardware acceleration and its official driver, as I do under windows.

    Read the article

  • Multiple monitors showing same screen but different resolutions

    - by Luis Alvarado
    Is it possible to have 2 or more monitors showing the same screen, for example the same desktop but with different resolutions. Like the clone option in Nvidia or the mirror option using the Display settings in Ubuntu but instead of showing the same output with the same resolution, the both show the same output using a resolution that is native for each monitor connected. In my case if I have a netbook that has max resolution of 1360x768 and a TV that has 1280x1024, the would both show the same desktop but each with their own resolution that is compatible for each device. This would help in trying to find a resolution that works on both monitors and in cases like a mini netbook and a huge TV it would solve issues like having max 800x600 in one monitor and min 1024x768 in the other. In the case I tested I was using an HDMI cable but this question also involves VGA and any other connection. I have 3 tests scenarios for this: Scenario 1 - Laptop HP DV6000 (Intel Integrated Video) with 1360x760 connected to a Samsung LED 42 TV that has 1280x900. Scenario 2 - Laptop EEE with 1024x600 (Intel Integrated Video) connected to Sony LCD TV that supports 1280x900. Scenario 3 - Intel Desktop with Nvidia 440 GT with HDMI connected to Soneview 32' TV that supports 1920x1080 and VGA connected to an Epson Video Beam that supports 1280x1024 max. In this 3 scenarios I need to be able to show the same desktop and same views but on different resolutions for each output device. UPDATE: Tested with Xubuntu and the way it handles multiple monitors is precisely what I am asking. The ability to handle the resolution of different monitors showing the same thing.
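    In case it clarifies what I am after, this is roughly the xrandr invocation I have been experimenting with. The output names LVDS1 and HDMI1 are from my netbook and will differ per machine, and I am not sure --same-as actually gives a scaled clone rather than a cropped one:

      # list connected outputs and the modes each one advertises
      xrandr -q

      # drive each output at its own native mode while showing the same area
      xrandr --output LVDS1 --mode 1360x768 --output HDMI1 --mode 1280x1024 --same-as LVDS1

    What I cannot tell is whether X can scale one output to match the other, or whether the smaller output is always just a viewport into the larger one.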

    Read the article

  • Introducing the New Boot Framework in CE 7

    - by Kate Moss' Open Space
    CE 7 introduces a new boot loader framework, BLDR (platform\common\src\common\bldr\). Some people like its power and flexibility; others may feel it is too complicated for a boot loader framework. Whatever your preference, it is already there, so let's take a look at its features. Unlike the previous boot loader framework (which CE 7 still provides in platform\common\src\common\boot\), a monolithic library, the new framework has more architectural structure. It not only defines the main body but also provides rich components, such as file systems (BinFS/FAT), download transports, display, logging and block devices: BIOS INT13, FAL, IDE, Flash and so on. Note that in the block device category, FAL is for the legacy FMD/FAL stack, while Flash is for the latest MSFlash. Some of you may have found that an MSFlash MDD/PDD compatible partition is hard to create in a boot loader, and the new framework now provides a clean solution for that! (Since this is a big topic, I will cover it in a future post.) Today, I am going to show you some basic helper components - the image loading functions. When the OS image is stored on a block device, it can be in file form, say an NK.BIN in a FAT volume, or in raw form, say an image programmed into a BinFS partition. For the first case you can use BootFileSystemReadBinFile (platform\common\src\common\bldr\fileSystem\utils\fileSystemReadBinFile.c), and for the second you can use BootBlockLoadBinFsImage (platform\common\src\common\bldr\block\utils\loadBinFs.c) to load from a partition. Need sample code? No problem: BootLoaderLoadOs in platform\cepc\src\boot\bldr\loados.c provides a perfect example.

    Read the article

  • Advancing my Embedded knowledge.....with a CS degree.

    - by Mercfh
    So I graduated last December with a B.S. in Computer Science, in a pretty good well known engineering college. However towards the end I realized that I actually like Assembly/Lower level C programming more than I actually enjoy higher level abstracted OO stuff. (Like I Programmed my own Device Drivers for USB stuff in Linux, stuff like that) But.....I mean we really didn't concentrate much on that in college, perhaps an EE/CE degree would've been better, but I knew the classes......and things weren't THAT much different. I've messed around with Atmel AVR's/Arduino stuff (Mostly robotics) and Linux Kernals/Device Drivers. but I really want to enhance my skills and maybe one day get a job doing embedded stuff. (I have a job now, it's An entry level software dev/tester job, it's a good job but not exactly what my passion lies in) (Im pretty good with C and certain ASM's for specific microcontrollers) Is this even possible with a CS degree? or am I screwed? (since technically my degree usually doesn't involve much embedded stuff) If Im NOT screwed then what should I be studying/learning? How would I even go about it........ I guess I could eventually say "Experienced with XXXX Microcontrollers/ASM/etc...." but still, it wouldn't be the same as having a CE/EE degree. Also....going back to college isn't an option. just fyi. edit: Any book recommendations for "getting used to this stuff" I have ARM System-on-Chip Architecture (2nd edition) it's good.....for ARM stuff lol

    Read the article

  • Which language meets my needs? [closed]

    - by Gerald Goward
    I am a junior C# developer, working for half a year now. In my company I am working on some enterprise projects and after doing it for quite some time I understood that I dont like enterprise projects. I have my own browser-game written in PHP+MySql with some simple HTML+CSS and I have 300 active (those, who entered the game at least once per 5 days) players currently :) After thinking quite some time I understood that I am interested in: 1). Web-development AND 2). standalone programs (but not enterprise ones). 3). Development for mobile platforms is also nice, Android/iOs. 1st and 2nd categories are what I want the most. Android/iOs is good too. I am NOT interested in big systems which are hard to integrate, I am not interested in enterprise systems. In future I would like to start my own business/projects. I would like to create my own projects or/and create a small programmers company to create and release own products. Please tell me what programming language(s)/technologies would you advice me for it? Thanks alot! UPD: It's NOT a "which language is better" or any flame/holywar generating topic since I ask for language that suits my EXACT needs better. I believe C++ is better for low-level coding, while PHP is good for web-development and Object-C being made for iOs. I am still newbie at programming so dont hate me please.

    Read the article

  • Why are my videos playing speeded up with no audio, but work fine if I log in as a guest?

    - by Martins Kruze
    Since the start of this week I have been experiencing a glitch in the multimedia on my Samsung R518 laptop. I have 2 problems: Videos in every player are speeded up around 2 or 4 times (including youtube.com (both HTML5 and flash variants), any other video on the web and videos on my laptop played by Totem Media Player), exception is VLC player, but 2nd problem does concern even that. There is no sound - simple as that (with or without headphones plugged in). These all problems are now, and has not seen before, I upgraded to Ubuntu 10.10 after it was possible, and from start I didn't have anything from this - it just started in this week. I haven't even putted new software in. I have more or less solved the question (kind of) - I just logged in as a guest - and it all works, but when I make a new user - it does not. Please help me. Some stats below: sudo lshw -c sound *-multimedia description: Audio device product: RV710/730 vendor: ATI Technologies Inc physical id: 0.1 bus info: pci@0000:01:00.1 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:48 memory:cfeec000-cfeeffff *-multimedia description: Audio device product: 82801I (ICH9 Family) HD Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 03 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:47 memory:fc200000-fc203fff sudo lshw -c video *-display description: VGA compatible controller product: M92 LP [Mobility Radeon HD 4300 Series] vendor: ATI Technologies Inc physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:46 memory:d0000000-dfffffff ioport:2000(size=256) memory:cfef0000-cfefffff memory:cfe00000-cfe1ffff
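    Since everything works under a guest account, I suspect something in my own user's audio configuration is broken rather than the drivers themselves. This is the reset I am considering; the file names are just the per-user PulseAudio/ALSA config locations I know of on 10.10, so I may be missing some:

      # move my per-user sound configuration out of the way
      mv ~/.pulse ~/.pulse.bak 2>/dev/null
      mv ~/.asoundrc ~/.asoundrc.bak 2>/dev/null

      # restart PulseAudio so it regenerates fresh defaults
      pulseaudio -k
      pulseaudio --start

    If sound and normal-speed video come back after that, I assume the old config was the culprit; if not, I can move the backups back. Does this sound like a safe test?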

    Read the article

  • Java ME SDK 3.0.5 Integrated with NetBeans 7.1.1

    - by SungmoonCho
    NetBeans 7.1.1 now integrates Java ME SDK 3.0.5, so you do not have to download them separately. Java ME SDK was packaged in NetBeans Mobility Pack, a mobile application development toolkit for NetBeans. Therefore, Java ME SDK is no longer a separate menu on NetBeans. For those who have not downloaded Java ME SDK yet, please simply visit NetBeans website and download the latest version. For those who already have Java ME SDK integrated with NetBeans 7.1 or earlier, and want to update NetBeans IDE to 7.1.1, don't worry. They can co-exist. To use NetBeans plug-ins such as Device Selector, profiler, and Internationalization Resource Manager, you have to install "Java ME SDK Tools" from NetBeans. Here is how. 1.  Go to "Tools - Plug-ins" from NetBeans menu. You can find all the plug-ins you can install into NetBeans. Locate "Java ME SDK Tools" from the list. 2. Follow the instruction to install Java ME SDK Plug-ins. 3. Once completed, you will see new menu options. For example, you can find Device Selector under Tools - Java ME. (If you used old version of Java ME, you will notice that there is not 'Java ME' menu any more. This is because all the sub-menus were integrated into appropriate places in NetBeans.) There is one thing to keep in mind; Since NetBeans 7.1.1 already includes Java ME SDK 3.0.5 and Java ME SDK 3.0.5 plug-ins must be installed through NetBeans plug-in menu, you should not download Java ME SDK 3.0.5 separately and try to integrate it with NetBeans. This may cause issues.

    Read the article

  • Can't connect to hidden network with BCM4313

    - by poomerang
    The wireless works fine with all the other Wi-Fi networks I have tried; the only problem is with this hidden network. I should add it's the only hidden network I've tried, so I am not sure if the problem is it being hidden or something else, but I've checked the NetworkManager settings against another Ubuntu system (which can connect) and they appear to be the same, passphrase included. The network uses WPA2 Personal with AES encryption. I don't know how to check this setting, but I believe it's the usual choice for WPA2 and therefore usually not a problem. Also, I can connect through Ethernet, which should rule out any blacklisting of my device, I believe. I usually use the brcmsmac driver; I've also tried STA but the result is the same. I've also tried the suggestion from "Unable to connect to hidden SSID" with no luck. The output of lspci -v is:

      03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
              Subsystem: Askey Computer Corp. Device 7175
              Flags: bus master, fast devsel, latency 0, IRQ 17
              Memory at d4000000 (64-bit, non-prefetchable) [size=16K]
              Capabilities: <access denied>
              Kernel driver in use: brcmsmac
              Kernel modules: bcma, brcmsmac
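    To separate "hidden SSID" problems from driver problems, I am thinking of bypassing NetworkManager once and trying wpa_supplicant by hand, since scan_ssid=1 is specifically meant for hidden networks. This is only a sketch from my notes; the interface name, SSID and passphrase below are placeholders:

      # stop NetworkManager so it does not fight over the interface
      sudo service network-manager stop

      # minimal config; scan_ssid=1 makes wpa_supplicant probe for the hidden SSID
      cat > /tmp/hidden-test.conf <<'EOF'
      network={
          ssid="MyHiddenNet"
          scan_ssid=1
          key_mgmt=WPA-PSK
          psk="passphrase-goes-here"
      }
      EOF

      sudo wpa_supplicant -i wlan0 -c /tmp/hidden-test.conf
      # in another terminal, once it reports being connected:
      sudo dhclient wlan0

    If this associates, I assume the card and driver are fine and the issue is in how NetworkManager handles the hidden flag; if it does not, that points back at brcmsmac.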

    Read the article

  • Dirt Cheap Bi-Directional Antenna Wirelessly Extends Your LAN

    - by Jason Fitzpatrick
    If you’re looking for an effective way to link remote LANs without the hassle of laying cable, this DIY bi-directional antenna is a quick (and cheap) way to bring internet access to outbuildings and other locations. Tinkerer Danilo Larizza needed to share internet access between apartments that are relatively close together but not hardwired, ruling out simply sharing the access over the existing LAN infrastructure. His solution combines a simple scrap-wire antenna array mounted inside a plastic food bin (seen here with the cover removed to show the antenna) with some coaxial cable linking the antenna to two routers. Our favorite part of his build is that he constructed the pair to establish whether the antenna setup would even work in his location, intending to buy commercial antennas if it did; his Tupperware models worked so well, however, that they’re now the permanent solution. Hit up the link below for more information about the project.

    2.4 Ghz Directive Biquad Antenna [via Hack A Day]

    Read the article

  • Can't access external hard drives or thumb drives

    - by calden
    I am not a complete Linux noob, but I don't know a lot either and would greatly appreciate some help with this. I just installed Ubuntu 10.10 onto my laptop. Everything is working great, however USB devices such as thumb drives and external hard drives won't show up. I have been looking around a bit, and when I run sudo fdisk -l it displays this:

      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00065684

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1       29255   234983424   83  Linux
      /dev/sda2           29255       30402     9212929    5  Extended
      /dev/sda5           29255       30402     9212928   82  Linux swap / Solaris

      Disk /dev/sdb: 16.0 GB, 16026435072 bytes
      64 heads, 32 sectors/track, 15283 cylinders
      Units = cylinders of 2048 * 512 = 1048576 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000df90d

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *           1       15283    15649776    7  HPFS/NTFS

    It does seem to show my 16 GB thumb drive, but other than seeing it here I can't access it to read and write files. It does the same with my external hard drive. I know those devices work, as I have tried them on my other computer and they work fine. Also, this is what is in fstab, if it helps anybody help me:

      proc        /proc   proc   nodev,noexec,nosuid   0   0
      /dev/sdb1   /       ext4   errors=remount-ro     0   1
      /dev/sdb5   none    swap   sw                    0   0

    Thank you very much for the help everyone.
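    Something that confuses me further: my fstab says /dev/sdb1 is my root (/) partition, while fdisk currently shows /dev/sdb as the 16 GB NTFS thumb drive, so I suspect the device names have shifted since the install and something is getting confused by that stale entry. As a test I am thinking of trying a manual mount; the mount point is arbitrary, and I plan to double-check the device name with blkid first so I do not touch the wrong disk:

      # confirm which device is really the thumb drive and note its UUID
      sudo blkid

      # try mounting the NTFS thumb drive by hand
      sudo mkdir -p /media/usbtest
      sudo mount -t ntfs-3g /dev/sdb1 /media/usbtest
      ls /media/usbtest

    If a manual mount works, does that point at the /dev/sdb1 line in fstab confusing the automounter, or is that unrelated?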

    Read the article
