Search Results

Search found 7706 results on 309 pages for 'checked'.

Page 148 of 309

  • OpenGL ES Loading

    - by kuroutadori
    I want to know what the norm is for loading rendering resources. Take a button: when the application starts, a texture containing the button's image is loaded. When the button is tapped, a loader is added to a queue, which is processed on the render thread. It then fills an array buffer with vertices and texture coordinates when render is called, adds the result to a render tree, and renders. The render function looks like this:

    ```cpp
    void render() {
        update();                // drain the load queue
        mBaseRenderer->render(); // walk the render tree
    }
    ```

    update() is where the queue is checked to see if anything needs loading; mBaseRenderer->render() walks the render tree. What I am asking, then, is: should I even have update() there at all, and instead have everything preloaded before rendering starts? And if I can load on demand, for instance on a tap, how should that be done? (My current code causes a dequeueing buffer error, "Unknown error: -75", which I assume has to do with OpenGL ES and the context.)
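
    For the on-demand path, the key constraint is that buffer and texture creation must happen on the thread that owns the GL context. Below is a minimal sketch of a tap-to-load queue, in Java for an Android-style setup (all names are illustrative, not from the post above):

    ```java
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Tap handlers enqueue work from the UI thread; update() drains it on the
    // render thread, so all GL calls stay on the thread that owns the context.
    public class GlLoadQueue {
        private final Queue<Runnable> pending = new ConcurrentLinkedQueue<>();

        // Safe to call from any thread, e.g. inside a tap handler.
        public void enqueue(Runnable loadTask) {
            pending.add(loadTask);
        }

        // Call at the top of render(), before drawing anything.
        public void update() {
            Runnable task;
            while ((task = pending.poll()) != null) {
                task.run(); // create buffers/textures here, on the GL thread
            }
        }
    }
    ```

    If you are on a GLSurfaceView, its built-in queueEvent(Runnable) already does this job, which is worth ruling out before hunting for the error elsewhere.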

    Read the article

  • Scene graphs and spatial partitioning structures: What do you really need?

    - by tapirath
    I've been fiddling with 2D games for a while and I'm trying to move into 3D game development, so I thought I should get my basics right first. From what I've read, scene graphs hold your game objects/entities and their relations to each other, e.g. 'a tire' would be a child of 'a vehicle'. They're mainly used for frustum/occlusion culling and for minimizing collision checks between objects. Spatial partitioning structures, on the other hand, are used to divide a big game object (like the map) into smaller parts, so that you gain performance by drawing only the relevant polygons and, again, by limiting collision checks to those polygons. A spatial partitioning data structure can also be used as a node in a scene graph. But... I've been reading about both subjects and I've seen a lot of "scene graphs are useless" and "BSP performance gains are irrelevant on modern hardware" articles. Also, some of the game engines I've checked, like gameplay3d and jmonkeyengine, use only a scene graph (that may also be because they don't want to limit developers), whereas games like Quake and Half-Life use only spatial partitioning. I'm aware that the usage of these structures depends very much on the type of game you're developing, so for the sake of clarity let's assume the game is an FPS like Counter-Strike with some better outdoor-environment capabilities (like a terrain). The obvious question is which one is needed, and why, considering modern hardware capabilities. Thank you.
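
    For concreteness, a scene graph node is little more than a local transform plus children, with world transforms composed down the tree; a minimal Java sketch (the Matrix4 type is a placeholder standing in for whatever your math library provides):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class SceneNode {
        // Placeholder matrix type so the sketch stands alone; substitute your
        // math library's 4x4 matrix with real multiplication.
        public static class Matrix4 {
            public static Matrix4 identity() { return new Matrix4(); }
            public Matrix4 multiply(Matrix4 other) { return new Matrix4(); }
        }

        private final List<SceneNode> children = new ArrayList<>();
        private Matrix4 local = Matrix4.identity();

        public void attach(SceneNode child) {
            children.add(child); // e.g. a tire attached to a vehicle
        }

        // Moving a parent implicitly moves all descendants, which is the core
        // service a scene graph provides; culling can hook in at each node.
        public void render(Matrix4 parentWorld) {
            Matrix4 world = parentWorld.multiply(local);
            draw(world);
            for (SceneNode child : children) {
                child.render(world);
            }
        }

        protected void draw(Matrix4 world) { /* submit geometry here */ }
    }
    ```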

    Read the article

  • Problems with wired ethernet connection Ubuntu 11.10

    - by Andrew Fielden
    After some partition shuffling, I've got a problem on my 11.10 system. The wired ethernet interface fails to come up, although the wireless interface is working. I'm using NetworkManager. I thought this might be a problem with NetworkManager, so I checked the config files, which look OK. I then tried re-installing the package, but this didn't resolve the issue. I'm not sure at this point whether the problem is software configuration or a hardware fault. I've also tried the cable in other router ports, with the same result. The symptoms are:

    - System Settings > Network reports that the cable is unplugged (it isn't).
    - ifconfig reports the following:

    ```
    eth0  Link encap:Ethernet  HWaddr f0:4d:a2:a2:a7:fe
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:792 (792.0 B)  TX bytes:0 (0.0 B)
          Interrupt:46 Base address:0xe000
    ```

    - My /etc/network/interfaces file contains only: auto lo / iface lo inet loopback
    - My /etc/resolv.conf file contains only: # Generated by NetworkManager
    - The router's light is red for this port.
    - dmesg reports: ADDRCONF(NETDEV_UP): eth0: link is not ready
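
    Since /etc/network/interfaces contains no eth0 stanza, one cheap test is whether the interface comes up outside NetworkManager; a hedged sketch (assumes the wired network offers DHCP, and should be reverted afterwards so NetworkManager resumes managing eth0):

    ```
    # temporary test entry in /etc/network/interfaces -- not a permanent fix
    auto eth0
    iface eth0 inet dhcp
    ```

    Then run sudo ifup eth0. If dmesg still reports "link is not ready", the fault is likely below the IP layer entirely (cable, port, or the NIC itself).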

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility on CodePlex to script SQL Server configuration information. I've been making small changes to this application as my needs have changed. The application is a .NET 2.0 console application. This utility serves two needs for me. First, it helps with disaster recovery: all server-level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type, which enables the scripts to be easily run against a DR server. If these are checked into source control, you can view the history of each script and find out what changed and when. The second goal is to capture what changed inside a database. Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file, which makes it easier to track the changes to an object over time. This does include permissions and role membership, so you can capture security changes. My assumption is that a database backup is the primary method of disaster recovery for databases, so this utility is designed to capture changes to objects. You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • Visual Studio 2010 Modeling and Architecture Tools

    - by MikeParks
    Jennifer Marsman (Microsoft Evangelist) and Cameron Skinner (Microsoft Visual Studio Product Unit Manager) recently stopped by our office while passing through Louisville on their tour, to give us a presentation on the new Visual Studio 2010 Modeling and Architecture Tools. I checked out these new features when the Visual Studio 2010 beta versions originally rolled out and have been really impressed ever since, so it was pretty cool to learn some new techniques from Cameron himself, since he helped write the actual code behind some of these features. If you've upgraded to Visual Studio 2010 recently, I would highly recommend using the architecture tools. They're awesome. If you want to make improvements, they even have their own SDK, and there are plenty of blogs out there showing how to use it. I've been waiting to find a tool like this, where I can really analyze the code in solutions and projects and see how everything ties together. It's really handy if you're asked to work on a new project and aren't familiar with how it works: just run the tools, analyze the DLLs, learn how everything works, and then you'll be ready to implement new code. It's a great way to learn new systems quickly and easily, and it's all housed within the Visual Studio IDE. I just wanted to write a blog post to brag about it a little bit, so I figured I'd throw this up here. It's a must-have tool for developers/architects. Here are some screenshots from when I was using it earlier: [screenshots not included in this excerpt] Thanks everyone! - Mike

    Read the article

  • Yoga Pro 2 Wi-Fi not working

    - by user293004
    I installed Ubuntu 14.04 on my new Yoga Pro 2 and the wireless is not working. The machine shipped with Windows 8. Network Manager says Wi-Fi is disabled by a hardware switch. I tried putting a blacklist file in /etc/modprobe.d, as has been suggested in many places: I called the file blacklist-ideapad_laptop.conf and put the line blacklist ideapad_laptop in it. I checked that wireless is enabled in the BIOS; it is. I ran rfkill list all and it displayed:

    ```
    0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
    2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
    ```

    I ran iwlist wlan0 scan and it displayed: wlan0 Failed to read scan data : Network is down. I ran sudo rmmod ideapad_laptop and it displayed: rmmod: ERROR: Module ideapad_laptop is not currently loaded. I ran ifconfig wlp1s0 up and it displayed: wlp1s0: ERROR while getting interface flags: No such device. I ran lspci and it displayed: 01:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b). I ran sudo lshw -c network and it displayed:

    ```
    *-network DISABLED
         description: Wireless interface
         product: Wireless 7260
         vendor: Intel Corporation
         physical id: 0
         bus info: pci@0000:01:00.0
         logical name: wlan0
         version: 6b
         serial: 7c:7a:91:5f:9b:fa
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
         configuration: broadcast=yes driver=iwlwifi driverversion=3.13.0-24-generic firmware=22.24.8.0 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
         resources: irq:61 memory:b0400000-b0401fff
    ```

    The question "No wireless with Intel Centrino Advanced-N 7260" seems to deal with a similar issue; it suggests updating the firmware. So I downloaded iwlwifi-7260-ucode-23.214.9.0 from Intel's website, put the file iwlwifi-7260-9.ucode in /lib/firmware, and ran sudo lshw -c network again. It displayed exactly as before. Is there something else I need to do to install the new firmware?
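
    A few hedged notes: "Hard blocked: yes" means rfkill believes a physical switch or firmware hotkey has the radio off, and blacklisting ideapad_laptop only helps when that module is actually loaded (here rmmod says it isn't). For reference, the usual sequence looks like this:

    ```
    # the common ideapad_laptop workaround, for reference -- only applicable
    # when the module is loaded (check with: lsmod | grep ideapad)
    echo "blacklist ideapad_laptop" | sudo tee /etc/modprobe.d/blacklist-ideapad_laptop.conf
    sudo rfkill unblock all
    ```

    Also, new firmware only takes effect after the driver reloads (a reboot is simplest), and dmesg | grep iwlwifi shows which .ucode file the driver actually tried to load, which is worth checking before assuming the copy into /lib/firmware took effect.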

    Read the article

  • Removed Java and replaced it with the newest "Sun Java"; disc won't boot, and won't let me re-install grub using boot repair disc

    - by Al Rowe
    Had a minor problem with my stock market platform: the setup screen would freeze the program. I called their tech support and got their "Linux guy", who advised removing all Java and replacing it, not with the Synaptic version, but with the newest Sun Java. After the removal, the computer auto-rebooted and went to a blue mem-test screen. It showed no errors, but I couldn't get back in. I tried two versions of the boot repair disc from ISO (checked md5sum, showed good), but the fix aborted with an apt error detected. I opened a terminal and typed (or copy/pasted):

    ```
    sudo chroot "/mnt/boot-sav/sda1" apt-get -f install
    ```

    My system is Ubuntu 12.04. I had a few very minor issues from install, all fixed. I also added some of my favorite GNOME tricks just to make life easier, but none that could have caused this: scripts to add shortcuts to the desktop, open a terminal from inside any menu, access a root terminal, etc. The system was firewalled and using Avast antivirus (OK, I'm paranoid; I used to do Windows sys-op and security work), but I'm a relative newbie to Linux.
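
    If the boot repair disc keeps aborting, the manual grub reinstall from a live session is a reasonable fallback; a hedged sketch, assuming the Ubuntu root filesystem is on /dev/sda1 (adjust the devices to match your layout):

    ```
    # from a live CD/USB session
    sudo mount /dev/sda1 /mnt
    sudo mount --bind /dev /mnt/dev
    sudo mount --bind /proc /mnt/proc
    sudo mount --bind /sys /mnt/sys
    sudo chroot /mnt grub-install /dev/sda
    sudo chroot /mnt update-grub
    ```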

    Read the article

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side, and I just can't figure out how to turn it 90 degrees in Y once so that it faces the target correctly. I've checked the pivot in Maya, and it doesn't matter how I change it. This is the code I use to make the turret face the target:

    ```csharp
    void LookAt() {
        var forwardA = transform.forward;
        var forwardB = (toLookAt.transform.position - transform.position);

        var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
        var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
        var angleDiff = Mathf.DeltaAngle(angleA, angleB);
        //print(angleDiff.ToString());

        if (angleDiff > 20) {
            // Rotate towards the target
            transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
        } else if (angleDiff < 20) {
            transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
        }
    }
    ```

    I'm using Unity3D and would appreciate any help I can get. Thanks!

    Read the article

  • Web browser downloads only open target folders - cannot open files

    - by Pavlos G.
    After installing the xubuntu packages in order to try Xfce, I reverted back to GNOME 2. During the first login, I noticed that Thunar was now the default file manager. The Preferred Applications menu is also missing now, so I could not set Nautilus as the default. I removed all the xubuntu packages (including Thunar), and when I then tried to open a folder I was asked to select the default file manager; that's how I got Nautilus back. The next problem I'm facing concerns files downloaded from web browsers: the Open and Open containing folder options produce exactly the same result. If I double-click a downloaded file, it just opens the containing folder instead of opening the file with its associated application (e.g. LibreOffice Writer for .doc/.odt, SMPlayer for .avi/.wmv, etc.). The problem happens in both Firefox and Chrome. Through Nautilus, all files open correctly. Up until now I've tried the following: delete/recreate mimeTypes.rdf in my Firefox profile; create a new profile in Firefox; delete/recreate ~/.local/share/applications/mimeapps.list. I have already checked a similar article on this site. None of them worked. Any ideas on the issue would be appreciated.
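
    One avenue worth trying: browsers resolve both Open and Open containing folder through MIME associations, and installing Thunar may have left stale handlers registered. A hedged sketch of inspecting and repointing the directory handler (the exact .desktop filename can differ by release):

    ```
    # which .desktop file currently handles directories?
    xdg-mime query default inode/directory
    # point it back at nautilus
    xdg-mime default nautilus.desktop inode/directory
    ```

    Querying the same way for individual types, e.g. xdg-mime query default application/msword or video/x-msvideo, shows whether specific file associations were also lost.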

    Read the article

  • Can't connect to hidden network with BCM4313

    - by poomerang
    The wireless works fine with all the other Wi-Fi networks I have tried; the only problem is with this hidden network. I should add that it's the only hidden network I've tried, so I am not sure whether the problem is it being hidden or something else, but I've checked the NetworkManager settings against another Ubuntu system (which can connect) and they appear to be the same, passphrase included. The network uses WPA2 Personal with AES encryption. I don't know how to check this setting, but I believe it's the usual configuration for WPA2 and therefore usually not a problem. Also, I can connect through ethernet, which I believe should rule out any blacklisting of my device. I usually use the brcmsmac driver; I've also tried STA, but the result is the same. I've also tried the suggestion from "Unable to connect to hidden SSID", with no luck. The output of lspci -v is:

    ```
    03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
            Subsystem: Askey Computer Corp. Device 7175
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at d4000000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: brcmsmac
            Kernel modules: bcma, brcmsmac
    ```

    Read the article

  • Trying to install Canon LBP7750Cdn driver on Ubuntu 12.04

    - by Gideon
    I'm new to Ubuntu/Linux and had significant difficulties while attempting to get my printer working. The automatic driver-pairing wizard which Ubuntu uses to identify and install the appropriate drivers did not find my printer's driver. I managed to get it to print by manually selecting the generic configuration and checking the PCL6 option. The printer driver wizard does provide a list of Canon printers and actually lists my printer as LBP7750C (minus the "dn" at the end; I'm assuming that's because duplex ability and networking are not present on all the models, though I'm not sure whether this could be the source of the problem). But on selecting this option and trying to print, I receive this message:

    ```
    Idle - /usr/lib/cups/filter/foomatic-rip failed
    ```

    I searched for this problem, but while there were plenty of similar cases, they all had different resolutions and were all related to HP printers. Canon actually does provide a driver for my printer, but it comes with no installation instructions, unless you consider yourself an experienced CUPS guru. Seriously. If anyone can help me solve this foomatic-rip failure I'd be really grateful, and I'm sure many other folks would be too. (BTW, can't Canonical fix this type of thing for the next Ubuntu release? It seems like a small problem, but it causes many headaches and countless hours of lost production time.) Thanks in advance.

    Read the article

  • How do I get Graphics drivers / bluetooth / card reader working on an Acer Aspire V3-571G?

    - by Adam
    A couple of days ago I bought an Acer Aspire V3-571G laptop without an operating system installed; the only thing on it was Linpus Linux. I created a bootable CD with Ubuntu 12.04 64-bit (I read that my processor is 64-bit and that this might be a good fit for my hardware; I'm not especially fluent with all the computer stuff, still trying to learn) and replaced Linpus with Ubuntu. Everything seemed to work fine, but a few exceptions have come my way. My Bluetooth doesn't work: it seems to be switched on, but when I check System Settings the button is actually off, and I can't drag it permanently to the "on" position. I tried a couple of commands I found on the net, none of them helped, and there was nothing whatsoever in my BIOS settings about enabling Bluetooth. My card reader has serious problems copying more than one file at a time: I tried to put some music on my phone through a MicroSD card adapter (because my Bluetooth doesn't work) and it got stuck every single time I copied an album onto it. I'm not sure whether all my drivers were properly installed, so I checked what the terminal could tell me about my graphics. I typed sudo lshw -c display and got:

    ```
    *-display UNCLAIMED
         description: VGA compatible controller
         product: NVIDIA Corporation
         vendor: NVIDIA Corporation
         physical id: 0
         bus info: pci@0000:01:00.0
         version: a1
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress vga_controller cap_list
         configuration: latency=0
         resources: memory:b2000000-b2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:2000(size=128)
    *-display
         description: VGA compatible controller
         product: Ivy Bridge Graphics Controller
         vendor: Intel Corporation
         physical id: 2
         bus info: pci@0000:00:02.0
         version: 09
         width: 64 bits
         clock: 33MHz
         capabilities: msi pm vga_controller bus_master cap_list rom
         configuration: driver=i915 latency=0
         resources: irq:44 memory:b3000000-b33fffff memory:c0000000-cfffffff ioport:3000(size=64)
    ```

    As I said, I'm no expert and not a native English speaker, but this doesn't seem right. I've got an NVIDIA GeForce GT 640M.
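
    For the graphics side, the UNCLAIMED flag on the NVIDIA entry means no kernel driver is bound to the discrete GPU. This is an Optimus (Intel + NVIDIA) machine, and on 12.04 the usual route was Bumblebee; a hedged sketch from that era (package names may differ on later releases, which use nvidia-prime instead):

    ```
    sudo add-apt-repository ppa:bumblebee/stable
    sudo apt-get update
    sudo apt-get install bumblebee bumblebee-nvidia
    # after a reboot, run a program on the NVIDIA GPU explicitly:
    optirun glxgears
    ```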

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain?

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and that content has now moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. Apache performs the redirect early; PHP is not even loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that hikes up the CPU for hours at a time. I checked Webmaster Tools and the corresponding documentation for a way to let Google know that the content had moved from a subdirectory to a subdomain, but with little luck; basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send out 301 redirects for a particular subdomain? I was thinking perhaps of Nginx, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
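
    As far as I know there is no crawler hint for a directory-to-subdomain move (Webmaster Tools' "Change of address" applies to whole domains only), so per-URL 301s really are the expected signal. Two hedged suggestions for the load: serve the redirect as cheaply as possible, and lower GoogleBot's crawl rate in Webmaster Tools (Site Settings), which directly addresses the 5 requests/second. A minimal early-redirect sketch with hypothetical names (requires mod_rewrite, placed in the old host's vhost):

    ```apache
    # hypothetical hostnames/paths -- adjust to the real catalog location
    RewriteEngine On
    RewriteRule ^/catalog/(.*)$ http://catalog.example.com/$1 [R=301,L]
    ```

    A plain RedirectPermanent /catalog http://catalog.example.com from mod_alias is even cheaper if the old paths map over one-to-one.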

    Read the article

  • Check parameters annotated with @Nonnull for null?

    - by David Harkness
    We've begun using FindBugs and annotating our parameters with @Nonnull appropriately, and it works great for pointing out bugs early in the cycle. So far we have continued checking these arguments for null using Guava's checkNotNull, but I would prefer to check for null only at the edges: places where a value can come in without having been checked, e.g. a SOAP request.

    ```java
    // service layer accessible from outside
    public Person createPerson(@CheckForNull String name) {
        return new Person(Preconditions.checkNotNull(name));
    }
    ...
    // internal constructor accessed only by the service layer
    public Person(@Nonnull String name) {
        this.name = Preconditions.checkNotNull(name); // remove this check?
    }
    ```

    I understand that @Nonnull does not block null values itself. However, given that FindBugs will point out anywhere a value is transferred from an unmarked field to one marked @Nonnull, can't we depend on it to catch these cases (which it does) without having to check these values for null everywhere they get passed around in the system? Am I naive to want to trust the tool and avoid these verbose checks? Bottom line: while it seems safe to remove the second null check above, is it bad practice? This question is perhaps too similar to "Should one check for null if he does not expect null", but I'm asking specifically in relation to the @Nonnull annotation.

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with two bones and an IK handle, and made a simple animation in which the cylinder bends and returns to straight over 24 frames. Following that, I selected everything and baked all the bones and the IK (selecting everything in the graph editor for the animation), and even the cylinder. I saved the scene and then selected all and exported as FBX with animation and bake checked. In Unity I imported it and can see the animation in the preview, and when I load the model into the scene and play (after assigning the controller), I can see the animation too. But now when I try to script it and control the animation, nothing happens. As a test, I tried the following in the Update method:

    ```csharp
    if (animation.isPlaying)
        Debug.Log("Animation Works");
    else
        Debug.Log("Animation not working");
    ```

    Neither message is ever logged. My animation is called "bend", so just as a try I did the following, and nothing happens:

    ```csharp
    animation.Play("bend");
    ```

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I mess up on the Maya part or the Unity part? Thanks for the help.

    Read the article

  • iptables unresolved dependencies

    - by tertle
    I'm trying to set up OpenVPN Access Server on a VPS running Ubuntu 9.10 for a friend, so she can play games from her uni campus. The problem is that I keep running into this error when trying to start OpenVPN:

    ```
    Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
    ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
     'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
     'iptables-restore: line 46 failed']:
    internet/base:1175,internet/base:752,internet/process:45,internet/process:306,
    internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,
    svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,
    sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
    service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
    service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
    service failed to start due to unresolved dependencies: set(['iptables_openvpn'])
    ```

    Now, I've already had my provider enable the TUN/TAP device driver, and I checked this using cat /dev/net/tun, which returned "File descriptor in bad state"; I believe that means it's enabled. After extensive searching, I've been unable to find any solution other than people suggesting to make sure the TUN/TAP device driver is enabled. Any ideas on how to solve my issue? I'm not very experienced with Linux and I feel in over my head here, so any advice is greatly appreciated. Edit: I just stumbled across a possible lead; not sure how I missed it earlier. I believe I need my provider to run modprobe ipt_mark and modprobe ipt_MARK on the host node. Is this correct, and something I should try to get done?

    Read the article

  • Turning a problem into a data model

    - by Fogmeister
    OK, I have an app that I'm creating, but I'm really not sure how to approach the problem. The idea is fairly simple; I'm just not sure how to wrap it in a data model (or even whether I should). TBH I feel like I'm making it more complicated than it needs to be. How it works: the app will have circles in a row along the top that need to be connected to circles in a row along the bottom. 10 circles at the top, 10 at the bottom, one connection per pair of dots. Anyway, I can get the dots to connect; I'm just not sure how to wrap it in a data model so that I can analyse what has been connected and see whether it is right or not. The circles will be questions and answers. I can make an array of question objects with question and answer properties, and display these as the dot pairs. I'm just not sure how to record which questions have been connected to which answers. It is valid for a user to connect a wrong answer, as they all get checked at the end. I was thinking of using SpriteKit, but this isn't a restriction; I could use UIKit or something else. TBH, this question is fairly language-free, as I'm just after a way of modelling it.
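
    A minimal sketch of one way to model it (in Java for illustration; the same shape carries straight over to Swift or Objective-C): keep the correct question/answer pairs as ground truth, and record the user's connections separately, so wrong connections are allowed until checking time. All names here are illustrative.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // The round owns the correct pairs; user connections live separately
    // and are only compared against the answers at the end.
    public class MatchingRound {
        private final Map<String, String> correctAnswers = new HashMap<>(); // question -> answer
        private final Map<String, String> userConnections = new HashMap<>(); // question -> chosen answer

        public void addPair(String question, String answer) {
            correctAnswers.put(question, answer);
        }

        // Called when the user drags a line from a top circle to a bottom one.
        // Re-connecting a question simply replaces its previous connection.
        public void connect(String question, String answer) {
            userConnections.put(question, answer);
        }

        public boolean allConnected() {
            return userConnections.size() == correctAnswers.size();
        }

        // Checked once at the end, as described above.
        public long score() {
            return userConnections.entrySet().stream()
                    .filter(e -> e.getValue().equals(correctAnswers.get(e.getKey())))
                    .count();
        }
    }
    ```

    Each circle sprite then only needs a reference to its question or answer string, which keeps the model independent of SpriteKit versus UIKit.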

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map-maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be re-encoded, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst-case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the tile data is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position, like this: private HashMap< Integer, Integer [][]
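
    For scale, re-encoding one 128-tile row per edit is trivial next to rendering; a minimal sketch of row-level RLE (assumes a row of tile ids stored in a plain int[]):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public final class RunLength {
        // Encodes a row of tile ids as (count, id) pairs.
        public static int[] encode(int[] row) {
            List<Integer> out = new ArrayList<>();
            int i = 0;
            while (i < row.length) {
                int id = row[i];
                int count = 1;
                while (i + count < row.length && row[i + count] == id) {
                    count++;
                }
                out.add(count);
                out.add(id);
                i += count;
            }
            int[] packed = new int[out.size()];
            for (int j = 0; j < packed.length; j++) packed[j] = out.get(j);
            return packed;
        }
    }
    ```

    Keeping rows decoded while a map is open in the editor, and storing them encoded otherwise, avoids even this small cost on the hot path.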

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC, they left the default storage rule unchanged, created a new storage rule called DispByContentId, and made that the default for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise, the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough, it was no longer default but DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed while creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx

    I've posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don't get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it's Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn't a problem with Stackify: this instance was still there, and still sending data, even though the "Status" check showed that the instance no longer existed. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. "Surely, this can't be," I'm thinking at this point. Since this worker role is in our QA environment, it has debug logging turned on, and using our log file tailing feature I could see that, yes indeed, this deleted deployment was still executing code: pulling messages off queues, sending alerts, changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete, not continue to eat your data. As soon as I learn more, I'll post an update.

    Read the article

  • Dell Mini 9 Integrated Microphone -- not working

    - by Josh Eismanaf
    I'm running the latest Debian version, 6.0, on my Dell Mini 9. Webcam and sound work, but the microphone is a no-go. I hope that even though it is Debian, fellow Linux users can help. In alsamixer (1.0.23), I have the Capture, Digital, and Mic settings all the way up; there are no options for adjusting "Front" mic settings (maybe those were in older alsamixer versions?). In Sound Recorder, playback is loud noise/feedback. Under Preferences in Volume Control, on the Recording tab, both Capture and Digital are permanently checked, regardless of whether I uncheck them. I'm not sure how to interpret the above, but I am just trying to offer as much relevant background information as I can. I've been scouring forums for the answer, but to no avail; most questions/answers are from 2008, and I couldn't find a solution. I'm not very handy with a Linux machine, but I love to learn / am learning. Also, I've attempted the solution linked here, and it didn't work. Thanks in advance for your help!

    Read the article

  • Unity 3D does not work on a Dell system with an AMD Radeon HD 6470M

    - by VeeKay
    I am running 64-bit Ubuntu on a Dell with a 1GB graphics card. I log in to the "Ubuntu" session hoping to see Unity 3D, but Unity 2D runs instead; echo "$DESKTOP_SESSION" confirms Unity 2D. I've checked System Info, and the graphics row shows as empty, so I've presumed that the graphics drivers aren't detected. Hence I went to Additional Drivers and installed the fglrx driver that the UI suggested. Even after installing it, the graphics entry in System Info still shows nothing, and Unity 2D still runs in spite of all the effort. Please help! How can I get my Unity 3D back?

    Hardware info:

    - Video card: AMD Radeon™ HD 6470M - 1GB (For ICC)
    - RAM: 6GB (1 x 2GB + 1 x 4GB) 2 DIMM DDR3 1333MHz
    - OS: 64-bit Ubuntu 11.10

    Edit: output of /usr/lib/nux/unity_support_test -p:

    ```
    X Error of failed request:  BadRequest (invalid request code or no such operation)
      Major opcode of failed request:  155 (GLX)
      Minor opcode of failed request:  19 (X_GLXQueryServerString)
      Serial number of failed request:  21
      Current serial number in output stream:  21
    ```

    Read the article

  • What can be done to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers I belong to has metrics associated with our code, and over the last several months the errors in our live system have increased by a large amount. We require that updates to applications be tested by at least one other programmer before going live. I am personally against this, as I think applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I say end users should test in our scenario is that we don't have business analysts, just programmers. I come from a background where BAs took care of all the testing once programmers signed off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to catch issues between the development and live environments, and it does catch some bugs. But we don't really do end-user testing at all; no one tests our code except programmers, which I think is what gets us into this mess (ideally we would have BAs, QA, or professional testers). We don't have a QA team or anything of that nature, and we don't have fully laid-out test cases for our projects. OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them, so I don't have the standing to tell them they are doing it all wrong. I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue?

    Read the article

  • Using local repository with vmbuilder and https

    - by Onitlikesonic
    I seem to be having problems using vmbuilder with a local HTTPS mirror (--mirror=https:///archive.ubuntu.com/ubuntu/), as shown below:

    ```
    Process (['/usr/sbin/debootstrap', '--arch=amd64', 'precise', '/tmp/tmpYc0cOktmpfs', '<my_internal_server>/ubuntu/']) returned 1. stdout: I: Retrieving Release
    E: Failed getting release file <my_internal_server>/ubuntu/dists/precise/Release
    , stderr: 2012-10-18 10:36:36,429 INFO : Unmounting tmpfs from /tmp/tmpYc0cOktmpfs
    Traceback (most recent call last):
      File "/usr/bin/vmbuilder", line 24, in <module>
        cli.main()
      File "/usr/lib/python2.7/dist-packages/VMBuilder/contrib/cli.py", line 216, in main
        distro.build_chroot()
      File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 83, in build_chroot
        self.call_hooks('bootstrap')
      File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 67, in call_hooks
        call_hooks(self, *args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 165, in call_hooks
        getattr(context, func, log_no_such_method)(*args, **kwargs)
      File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 136, in bootstrap
        self.suite.debootstrap()
      File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/dapper.py", line 269, in debootstrap
        run_cmd(*cmd, **kwargs)
      File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 120, in run_cmd
        raise VMBuilderException, "Process (%s) returned %d. stdout: %s, stderr: %s" % (args.__repr__(), status, mystdout.buf, mystderr.buf)
    VMBuilder.exception.VMBuilderException: Process (['/usr/sbin/debootstrap', '--arch=amd64', 'precise', '/tmp/tmpYc0cOktmpfs', '<my_internal_server>/ubuntu/']) returned 1. stdout: I: Retrieving Release
    E: Failed getting release file <my_internal_server>/ubuntu/dists/precise/Release
    , stderr:
    ```

    I've checked that the files are in the correct place, and I'm able to set this up using HTTP instead of HTTPS. However, this server will only be providing HTTPS access to the repos; HTTP is only temporarily open. This might be because the certificate is not valid (since it's self-signed), or because vmbuilder doesn't support HTTPS. In either case, how can I get this to work? (If it's a case of the invalid certificate, I don't mind ignoring any checks.)
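
    If the self-signed certificate is the culprit (debootstrap fetches the Release file with wget, which by default refuses certificates it cannot verify), a hedged workaround is to teach the build host to trust the mirror's certificate before running vmbuilder:

    ```
    # assumes the mirror's self-signed certificate is at hand as mirror.crt
    sudo cp mirror.crt /usr/local/share/ca-certificates/mirror.crt
    sudo update-ca-certificates
    ```

    A quick way to test the hypothesis first: run wget https://<my_internal_server>/ubuntu/dists/precise/Release on the build host; if that fails with a certificate error, vmbuilder will fail the same way.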

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started behaving unexpectedly. The code is just a media-file-loading JavaScript function that uses jQuery. It's pretty long, but in essence it is like this:

    ```javascript
    var $audio = $('<audio>');
    $audio.on('canplaythrough', function (e) {
        $audio[0].play();
    });
    $audio.attr('src', 'song.ogg');
    ```

    Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML page at http://www.joanbotella.com/tests/loading , where you can see what's happening, and you can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, rather than through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting and I have Apache misconfigured for serving media files. My software versions are Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125, and jQuery 1.11.0. Can you help me? Thanks in advance!
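
    One server-side angle worth checking, as a hedged guess: browsers are sensitive both to the Content-Type of media files and to HTTP Range support when deciding whether a file can play through. Making sure Apache labels Ogg files as media rather than text/plain is a one-liner (standard mod_mime directives, placed in the vhost or an .htaccess file):

    ```apache
    AddType audio/ogg .ogg .oga
    AddType video/ogg .ogv
    ```

    Running curl -I http://localhost/tests/loading/song.ogg then shows the served Content-Type and whether Accept-Ranges: bytes is present.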

    Read the article
