Search Results

Search found 15591 results on 624 pages for 'problems'.

  • Google Programmers

    - by seth
    As a soon-to-be software engineer, I have studied several languages during my time in college: C, C++, Java, Scheme, Ruby and PHP, for example. However, one of the main principles at my college (recognized by many as the best in my country) is to teach us how to learn for ourselves and how to search the web when we have a doubt. This leads to a proactive attitude: when I need something, I go and get it, and this has worked for me so far. Recently, though, I started wondering how much development I would be able to do without internet access, and the answer bugged me quite a bit. I know the concepts of the languages and mostly know what to do, but I was amazed by how "slow" things were without Google to help with the development. The problem was mostly related to specific syntax, and it was not without some effort that I solved some of the SPOJ problems in C++. Is this normal? Should I be worried and try to change something in my programming behaviour?

    UPDATE: I'll give a concrete example: reading and writing a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling "read file java" and refreshing my memory. I completely understand the code and fully understand what it does. But I am sure that without Google it would take me a few tries to read and write correctly (if I had to sit in front of the screen with a blank page and write it without consulting any source whatsoever).
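
    A minimal sketch of the kind of snippet described above, assuming a made-up notes.txt file:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;

        public class FileRoundTrip {
            public static void main(String[] args) throws IOException {
                // write a couple of lines to the (hypothetical) file
                BufferedWriter writer = new BufferedWriter(new FileWriter("notes.txt"));
                writer.write("first line");
                writer.newLine();
                writer.write("second line");
                writer.newLine();
                writer.close();

                // read the file back one line at a time
                BufferedReader reader = new BufferedReader(new FileReader("notes.txt"));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
                reader.close();
            }
        }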

  • Cross Apply Ambiguity

    - by Dave Ballantyne
    Cross apply (and outer apply) are a very welcome addition to the TSQL language. However, today, after a few hours of head scratching, I have found a simple issue which could cause big problems. What would you expect from this statement?

        select *
        from sys.objects b
        join sys.objects a on a.object_id = object_id

    No prizes for guessing: SQL Server errors with “Ambiguous column name 'object_id'”. What would you expect from this statement?

        Select *
        from sys.objects a
        cross apply( Select * from sys.objects b where b.object_id = object_id) as c

    Surprisingly, perhaps, the result is a cross join of sys.objects. Well, what happened there? If you look at the apply statement, within the where clause only one of the conditions is qualified with a table name. This means it is interpreted as “b.object_id = b.object_id”, leaving the cross apply with no join to the parent sys.objects table and so producing the cross join. The fix is, obviously, simple:

        Select *
        from sys.objects a
        cross apply( Select * from sys.objects b where b.object_id = a.object_id) as c

    So why no “Ambiguous column name” error? I’ve raised a connect item on this issue here.

  • Will I need a dedicated static IP or a unique IP is enough to SSL enable my website?

    - by Devner
    Hi, this is the first time I am dealing with SSL and a dedicated static IP / unique IP. Now this webhost says that they will provide a unique IP (not shared with other customers) but do NOT guarantee that it will be static. I plan to make my website SSL enabled and install an SSL certificate. So in order to SSL enable my website, will I really need a dedicated static IP, or will this unique IP (without the guarantee that it will be static) be enough? What problems will I face if the IP is not static? I have already bought hosting from them, and they only showed me that option while adding optional services to the account (after I placed my order), so I did not even have a clue about this. Thank you all in advance.

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping. The lines are much crisper nearby, but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):

        ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription
        {
            AddressU = TextureAddressMode.Clamp,
            AddressV = TextureAddressMode.Clamp,
            AddressW = TextureAddressMode.Clamp,
            Filter = Filter.Anisotropic,
            MaximumAnisotropy = anisotropicLevel,
            MinimumLod = 0,
            MaximumLod = float.MaxValue
        });

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    We're continuously finding the following exception in the event viewer on our live box: [snippet]

        Process information:
            Process ID: 3916
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: HttpException
            Exception message: Path 'PROPFIND' is forbidden.
        Thread information:
            Thread ID: 14
            Thread account name: OURDOMAIN\Account
            Is impersonating: True
        Stack trace:
            at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Other specs: Windows Server 2003 R2 & IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web services extension having previously been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS, as well as plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problems with the exceptions in our event log?

  • Export-Mailbox - "an unknown error has occurred"

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, however I run into errors all the time. The command I am executing is

        Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set correctly, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted was creating problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and then eventually times out. I'm within a VPN that has its own DNS server. I've tried to set my DNS manually, both through the Network Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf file and dumped the contents of my old resolv.conf into it. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out the dns=dnsmasq line.

        $ cat NetworkManager.conf
        [main]
        plugins=ifupdown,keyfile
        #dns=dnsmasq
        no-auto-default=D0:67:E5:EA:B6:6B,

        [ifupdown]
        managed=false

        $ nm-tool
        NetworkManager Tool

        State: connected (global)

        - Device: eth0  [Wired connection 1] -------------------------------------------
          Type:              Wired
          Driver:            tg3
          State:             connected
          Default:           yes
          HW Address:        D0:67:E5:EA:B6:6B

          Capabilities:
            Carrier Detect:  yes
            Speed:           1000 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.254.122
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.254.2
            DNS:             192.168.254.1

    What is strange is that the network will work fine for a few minutes and then start to time out; a few minutes later it will work again. I'm unable to hit internal or external sites when it is timing out. When I dig local sites, I receive no answer, but I do receive an answer for google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server, but I need to use our internal DNS to hit our internal sites. Nobody else is having issues and they are all using DHCP; this group includes one user who is running 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.

  • Spiceworks versus Request Tracker?

    - by dmackey
    We currently use Request Tracker for help desk ticketing and Spiceworks for asset inventorying. I am pondering whether it might be worthwhile to move from RT to Spiceworks for the help desk as well. Has anyone used both systems and can provide some insight into any benefits/problems with either system? Or does anyone have general philosophical reasons why one should use one solution over the other? Of course, RT is open source and Spiceworks is not, and usually this would be a major item for me, but since Spiceworks is free and engages its community fairly actively, it's not as major a concern for me (personally).

  • Can I force NFS automounts to use NFSv3?

    - by Steve
    I have a Linux server that is exporting NFSv4 as well as NFSv3. I have a Fedora 14 client that is defaulting to NFSv4 when automounting NFS shares off of the Linux server, and it seems to be causing some problems. All my other Linux clients on the network mount via NFSv3 without issue, so is there a way I can tell automount to mount the share via v3? I am pulling my automount maps via LDAP, with an entry in my /etc/auto.master file like so: +auto_master, so I assume it's a bit different than listing options with a regular automount map (i.e. /home --nfsvers=3 fileserver:/DATA)?
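
    A minimal sketch of what that might look like with autofs, assuming the NFS version is pinned through mount options in the map entry rather than on the master-map line (the map name and share key below are made up):

        # master map entry (from LDAP or /etc/auto.master):
        /data    auto.data    --timeout=60

        # corresponding map entry (auto.data), forcing NFSv3 via mount options:
        share    -fstype=nfs,nfsvers=3,rw,hard,intr    fileserver:/DATA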

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:

    1. When I import a single animation frame from my animation file into Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; it seems like newer frames are affected by older ones, so you get slightly off positions/rotations. This is true both when assigning PoseBone.matrix and PoseBone.matrix_basis.

        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # "animation_matrices" - a nested list of matrices generated from reading a file
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index + 1)
                bone_index += 1
            bone_index = 0

    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.

    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly, whether the bones had a hierarchy or none at all. In Blender 2.6 there are PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't; the API says it's in object space. So it should have worked, but doesn't. So I guess we need to calculate a local-space matrix from our armature-space animation matrices from the files. I tried multiplying PoseBone.matrix with PoseBone.parent.matrix.inverted(), in both possible orders, with no luck: still weird deformations.

  • OpenSuSE 11 iscsi target: HFS+ partition not seen by clients

    - by radiopaque
    I have an openSUSE machine as a file server, which has an Areca 1880i inside. It contains several partitions, for example a dm-0 and a dm-1 partition. The partitions are formatted as EFI system partitions with HFS+ file systems. My openSUSE machine could not read them, but iscsitarget exported them for my Macs. This worked for more than a year. For some reason now, after some network problems which were "solved", my dm-0 partition is not seen anymore! I suspect it is a problem on the iSCSI target side, i.e. the openSUSE machine. Can anyone suggest what I should look into? Any logs, any settings on the Linux machine? None of my Macs can access the partition, and they use different client software! Thanks!

  • How to view bad blocks on mounted ext3 filesystem?

    - by Basilevs
    I ran fsck -c on the (unmounted) partition in question a while ago. The process was unattended and the results were not stored anywhere (except in the bad blocks inode). Now I'd like to get the bad block information to know whether there are any problems with the hard drive. Unfortunately, the partition is used in a production system and can't be unmounted. I see two ways to get what I want:

    1. Run badblocks in read-only mode. This will probably take a lot of time and cause unnecessary burden on the system.
    2. Somehow extract the information about bad blocks from the filesystem itself.

    How can I view the known bad blocks registered in a mounted filesystem?
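
    A minimal sketch of the second option, assuming e2fsprogs is available and /dev/sda1 stands in for the real partition; dumpe2fs only reads filesystem metadata from the device, so the filesystem can stay mounted:

        # list the blocks recorded in the bad blocks inode of an ext2/ext3 filesystem
        dumpe2fs -b /dev/sda1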

  • Is "as long as it works" the norm?

    - by q303
    Hi, my last shop did not have a process. Agile essentially meant they did not have a plan at all about how to develop or manage their projects. It meant "hey, here's a ton of work. Go do it in two weeks. We're fast paced and agile." They released stuff that they knew had problems. They didn't care how things were written. There were no code reviews, despite there being several developers. They released software they knew to be buggy. At my previous job, people had the attitude that as long as it works, it's fine. When I asked for a rewrite of some code I had written while we were essentially exploring the spec, they denied it. I wanted to rewrite the code because code was repeated in multiple places, there was no encapsulation, and it took people a long time to make changes to it. So essentially, my impression is this: programming boils down to the following:

    - Reading some book about the latest tool/technology
    - Throwing code together based on this, avoiding writing any individual code because the company doesn't want to "maintain custom code"
    - Showing it and moving on to the next thing, "as long as it works."

    I've always told myself that at the next job I'm going to get a better shop. It never happens. If this is it, then I feel stuck. The technologies always change; if the only professional development here is reading the latest MS Press technology book, then what have you built in 10 years but a superficial knowledge of various technologies? I'm concerned about:

    - the best way to maintain professional standards
    - how to develop meaningful knowledge and experience in this situation

  • fix vmware workstation 9 installation in ubuntu 12.10

    - by Alessandro Belloni
    I have opened this thread because I upgraded to the Ubuntu 12.10 beta (kernel 3.5) and I have a problem with VMware Workstation 9: "Unable to change virtual machine power state: Cannot find a valid peer process to connect to". Does anyone have the same problem? I did a clean install of Ubuntu 12.10 (daily build), installed VMware 9 and patched it, but it is not working. I can't apply the patch correctly and get the modules built correctly. My configuration is a fresh Ubuntu 12.10 installation with a fresh VMware Workstation 9 install, on top of a Lenovo ThinkPad T420 with an Nvidia Optimus video card. This message is shown when I try to apply the patch:

        # Stopping VMware services:
           VMware Authentication Daemon                                        done
        At least one instance of VMware VMX is still running.
        Please stop all running instances of VMware VMX first.
           VMware Authentication Daemon                                        done
        Unable to stop services
        #

    How can I stop the VMware services to apply the patch? This message is shown when I try to patch again:

        # ./patch-modules_3.5.0.sh
        /usr/lib/vmware/modules/source/.patched found.
        You have already patched your sources. Exiting
        #

    But VMware is not working, and I can't uninstall it.

  • Ubuntu 12.04 MySQL 5.5 MyODBC 5.1 or 3.1 query hangs

    - by jorgearr
    I have been able to install Ubuntu 12.04 with LAMP and MySQL version 5.5.x. It works fine within Linux, and it allows me to connect via MyODBC from Windows Vista or Windows 7. I have configured networking access and have been able to access the server from Windows Vista using PuTTY and other TCP connections like MySQL Query Browser. I have also configured or disabled the ufw firewall and AppArmor. The connection works fine until I query data from the tables. It lets me query small amounts of data like SELECT name FROM users LIMIT 20, but if I do a SELECT * FROM users, it goes into a never-ending loop. This happens even on tables with very few records, like 5 or even fewer. The problem occurs only with Windows; I tried ssh from Linux Mint and it worked fine. I need to be able to work using MyODBC, either 3.51 or 5.1, since my client program is made in VB6 and connects to the MySQL server via TCP/IP. The server is an HP ProLiant ML350 G6 with a 64-bit Intel Xeon. I tried several Ubuntu Server versions (12.04 64-bit, 10.10 64-bit, 11.04 32-bit) and none has worked; I even tried CentOS 6.3 with the same result. As a reference, it works fine with another Ubuntu Server 6.x install on an HP ProLiant 150 with MySQL 5.0.x that is about 7 years old and was never updated. Help please.

  • Have to run sudo dhclient eth0 automatically every boot

    - by Fyksen
    I just installed Ubuntu 12.04.1 using the alternative install (for RAID 0 on some disks). I have some problems with the network. I'm at school, we use cable, and it has IPv6. If I run ifconfig eth0, here's my output:

        eth0      Link encap:Ethernet  HWaddr e0:cb:4e:87:ff:db
                  inet addr:128.39.194.217  Bcast:128.39.194.223  Mask:255.255.255.224
                  inet6 addr: 2001:700:1100:8008:e2cb:4eff:fe87:ffdb/64 Scope:Global
                  inet6 addr: fe80::e2cb:4eff:fe87:ffdb/64 Scope:Link
                  inet6 addr: 2001:700:1100:8008:48f7:c23:1d87:da6c/64 Scope:Global
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1063378 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:489811 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1577173461 (1.5 GB)  TX bytes:37043669 (37.0 MB)
                  Interrupt:68 Base address:0x6000

    My /etc/network/interfaces looks like this:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        # NetworkManager#iface eth0 inet dhcp
        # NetworkManager#hostname 2001:700:1100:1::4

        # This is an autoconfigured IPv6 interface
        iface eth0 inet6 auto

    (I had to remove the hash tags because of the big font I get on Ask Ubuntu.) Network Manager says that I'm not connected. Let me know if you need any more information. :)
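
    A minimal sketch of an ifupdown-managed DHCP stanza for eth0, on the assumption that the two lines above commented out with the # NetworkManager# prefix are why no IPv4 address is requested at boot and dhclient has to be run by hand (whether NetworkManager should instead be left in charge is a separate decision):

        # /etc/network/interfaces (sketch)
        auto eth0
        iface eth0 inet dhcp

        # keep the autoconfigured IPv6 interface
        iface eth0 inet6 auto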

  • Using the link command to keep backups on another drive

    - by Xavier
    I have a folder called /data/backup that does not have much space behind it. I have been told that if I link that folder (/data/backup) to a much bigger area, such as /bigdata/backup, I will still be able to run backups to the /data/backup folder. It will then just be a link: the data will be seen in both folders, while the latter one (/bigdata/backup) actually holds the backup results. Since /bigdata/backup has far more disk space, the backup will no longer fail because of space problems in /data/backup. Is this true?
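
    A minimal sketch of the setup being described, assuming a symbolic link is meant (hard links cannot span filesystems or point at directories) and that the existing folder can be moved aside first:

        # make sure the big destination exists
        mkdir -p /bigdata/backup

        # move the existing backup folder out of the way, keeping its contents
        mv /data/backup /data/backup.old

        # create the symlink: anything written to /data/backup now lands on /bigdata
        ln -s /bigdata/backup /data/backup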

  • What is the best nginx compression gzip level?

    - by Chamnap
    I'm using the nginx reverse proxy cache with gzip enabled. However, I have some problems with HTTP requests from Android applications to my Rails JSON web service. It seems that when I turn off the reverse proxy cache it works OK, because the response header then comes back without gzip. Therefore, I think the problem is caused by gzip. What is the most appropriate level of gzip compression?

        gzip on;
        gzip_http_version 1.0;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_proxied any;
        gzip_types text/plain text/css text/javascript application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions.

    First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, and so on. With one backlog, all developers always have to go via the product owner to have their topics added to the backlog, which seems like additional, unnecessary work for the product owner.

    Second, if you have a product owner who focuses only on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". With a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally but also keep them healthy inside.

    What is the best approach? One backlog or two?

  • HTTP 400 error for all websites

    - by Jason Sherman
    A couple of days ago, I started getting HTTP 400 responses from all websites. Nothing will go across port 80. However, everything works if I connect to the VPN. The weird thing is, without the VPN, other things still work, such as IM and anything else that doesn’t use port 80. Pinging also works. I haven’t noticed this behavior on any other computer on my network. The kicker is, if I log on as a local admin, everything works fine!!! I haven’t installed anything in the last couple of weeks and I don’t remember changing any configuration. I ran Forefront and HouseCall and neither found any problems.

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems:

    - massive code duplication
    - no encapsulation; most private data is accessible from outside, and some of the business data has also been made into singletons, so it's not just changeable from outside but from everywhere
    - no business model; business data is stored in Object[] and double[][], so no OO

    There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start attacking the problem to get some coverage quickly, so I'm able to show progress to management (and in fact start earning from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because those tests do not check whether something is correct.
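
    A minimal sketch of the kind of quick, cheap coverage described above: a JUnit 4 characterization test that pins down whatever the legacy code currently returns, so later refactoring can be done safely. LegacyPricing, its method and the values are made-up stand-ins for one of the untested classes:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class LegacyPricingCharacterizationTest {

            @Test
            public void pinsCurrentTotalForATypicalInput() {
                // the double[][] shape mirrors the style of business data in the question
                double[][] orderLines = { { 10.0, 2.0 }, { 5.0, 4.0 } };

                double total = LegacyPricing.total(orderLines);

                // The expected value is whatever the code produces today; the test
                // documents current behaviour and guards against unintended changes,
                // it does not assert that the behaviour is "correct".
                assertEquals(40.0, total, 0.0001);
            }
        }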

  • Best suited multi-function printer for Linux usage from a few choices

    - by Nakedible
    I want a cheap multi-function printer for Linux usage. I'm looking for rock-solid scanning and printing that works with big images. I'd prefer drivers that are available in Debian, or other drivers that are open source, but will settle for proprietary drivers if they are well contained and clean. Some choices I have are:

    - Samsung SCX-4300
    - HP LaserJet M1120 MFP
    - Samsung SCX-4500
    - Canon i-SENSYS MF4010
    - Brother DCP-7040

    I am also interested in opinions on which printer communication language is best for Linux usage with cheap printers. PostScript is nice, of course, but low-end PostScript printers often have problems when printing complex (large) PostScript files. It seems Samsung printers use SPL for communication, HP uses XQX and ZJS, and then there's of course PCL.

  • Tips and tricks to make NX server more stable

    - by gareth_bowles
    My shop has been using the FreeNX server on Fedora 11 for a while now, mostly with good results, especially with performance, but we have some annoying problems with client connections. There are two main issues:

    1. Client sessions sometimes freeze after a long time (it seems to take at least 2 hours of having the session active).
    2. We often have to make multiple attempts to start a new client session, especially if a previous session was suspended rather than terminated. In quite a few cases we've had to restart the NX server to get around this.

    Our NX server configuration is the default, except that we've enabled logging level 7 to /var/log/nxserver.log and set the font server to "unix:/7100" so that it uses xfs. Does anyone have any ideas for making things more stable?

  • How can I use dynamic routing with openvpn tunnels?

    - by pQd
    I'm thinking about using dynamic routing [OSPF or RIP] via OpenVPN tunnels. Right now I have a few offices connected in a full mesh, but this is not a scalable solution as we add more locations. I would like to avoid the situation where a lot of internal traffic is affected if one of the two VPN termination points that I plan to use is down. Do you have a similar configuration working in production? If so, what routing daemon did you use? Quagga? Something else? Did you encounter any problems? Thanks!
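
    A minimal sketch of one such setup using Quagga's ospfd, assuming tun0 is a point-to-point OpenVPN tunnel carrying 10.8.0.0/24 and 192.168.10.0/24 is a hypothetical office LAN to be advertised (addresses and interface names are illustrative only):

        ! /etc/quagga/ospfd.conf (sketch)
        interface tun0
         ip ospf network point-to-point
        !
        router ospf
         ospf router-id 10.8.0.1
         network 10.8.0.0/24 area 0.0.0.0
         network 192.168.10.0/24 area 0.0.0.0
        !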

  • What does a red icon in XP's "Unlock Computer" dialog mean?

    - by wikiti
    A user was working from home and had a colleague turn on her computer so she could remote desktop to it. All worked fine, but when she came into the office, used her computer for a while and then locked it, the computer icon in the "Unlock Computer" dialog had a red screen instead of a blue one, as in the following mockup: Mockup of red computer screen. It didn't cause any problems and it went away when she rebooted, but I was intrigued to find out whether something caused it or whether it was just a Windows oddity. I believe she just closed the remote desktop session (without really logging off) from home and then disconnected from the VPN before coming into the office. Any ideas?
