Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • Dropbox and IIS

    - by DigitalAce7
    I'm running into a problem using Dropbox with IIS. I am using the Dropbox Folder Sync plugin to sync a folder outside the Dropbox folder; the files are synced into my inetpub\root\downloads\ directory. The problem is not that it doesn't sync. The problem is that the files that get synced have no permissions attached to them, so IIS can't serve any of them. Initially the downloads were to be only PDFs, but since they don't open I tried an .ASPX file, and it fails to load as well. Is there any way around this, or another program I can use to sync files but still allow them to be served by IIS? Thanks for any help!
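
    A minimal sketch of one way to re-apply read permissions after each sync, assuming the stock IIS_IUSRS group and the directory path from the question (a workaround for the symptom, not a fix for why the plugin strips ACLs):

        rem re-grant inheritable read/execute rights to everything under the sync target
        icacls "C:\inetpub\root\downloads" /grant "IIS_IUSRS:(OI)(CI)RX" /T

    This could be run manually after a sync, or on a schedule, until the permissions problem itself is understood.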


  • Dualboot - Windows partition does not work [0xc000000e] + video

    - by Chestnutjam
    Some may claim that this has already been asked, but I don't see MY problem in any of those posts; and if they actually describe the same problem as mine, I may simply not have understood what they were trying to point out. I hope you understand why I need a direct answer to the problem I'm having. Here's a video: https://www.youtube.com/watch?v=lyu_BsUTk5Q Some information that may help: this is a Lenovo laptop which came with Windows 8. I installed Ubuntu 14.04 alongside Windows using a USB stick. It is possible to access my Windows (8.1) files from the file manager. I took off the Windows sticker, so I cannot get a Windows CD or anything from the support guys. I would just delete my Windows partition, but it's also tangled up in this geeky mystery. Thank you in advance!


  • Help with installing Ubuntu Server 12.04 on VMware

    - by zohreh
    I want to install Ubuntu Server 64-bit on VMware 8.0.2, but I've hit a problem and I don't know why. The installer says:

    "From here you can choose to retry DHCP network autoconfiguration (which may succeed if your DHCP server takes a long time to respond) or to configure the network manually. Some DHCP servers require a DHCP hostname to be sent by the client, so you can also choose to retry DHCP network autoconfiguration with a hostname that you provide."

    There are four options to continue and I don't know exactly which one to select. Also, what is the cause of the problem? Thanks a lot.
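
    If it comes to the "configure the network manually" option, the values end up in /etc/network/interfaces on the installed system. A sketch of a static configuration, with placeholder addresses that would need to match the VMware network (NAT or bridged) actually in use:

        # /etc/network/interfaces -- example static setup; addresses are illustrative
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1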


  • 12.04 precise HDMI not working on ATI Radeon

    - by sfxpt
    This is precisely a 12.04 HDMI output problem, because everything works on 11.10: all else is the same, only the OS version differs. The symptom is that my HDMI TV always shows "Unsupported resolution"; even after setting it (via gnome-control-center, which detects my TV just fine) to the resolution that works on Ubuntu 11.10, I still don't see any output on the TV. The audio is not working either. I think it is 12.04's problem, because if I boot into Ubuntu 11.10 with everything else unchanged, HDMI output (video and audio) works just fine. 12.04 is a fresh install, just a vanilla Ubuntu installation to the hard disk. How should I nail down / fix the problem?
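
    In case it helps anyone probing the same thing, xrandr can show what 12.04 thinks the TV supports and force a known-good mode; the output name and mode below are assumptions to adapt:

        # list detected outputs and the modes offered for each
        xrandr
        # force the mode that worked under 11.10 (assuming the output is named HDMI1)
        xrandr --output HDMI1 --mode 1920x1080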


  • Ubuntu 12.04 ATI 6450

    - by user210717
    Right now my video card gives no signal after the Ubuntu logo is shown (I see a black screen and that's it). I have installed Ubuntu 12.04 amd64, and if I remove the video card and use the VGA output from the motherboard, everything works with no problems. Other data: 4 GB of RAM (1333), 500 GB WD, AMD APU A6 3500 2.1 GHz. I forgot a couple of details. Everything was working great until last night, when the light went off (I don't know if I'm explaining myself; I'm from Argentina and English isn't my first language; I mean there was a power cut). When the power came back I used my PC until I went to bed (after upgrading), and this morning when I woke up I had this problem for breakfast. I've been reading a little; I had a similar problem before and fixed it, but that was a system problem, a missing package or something, I don't remember. Here the only issue is that the video card gives me no image after the Ubuntu logo.


  • What do you do when you are stuck while programming and you don't have access to the internet? [migrated]

    - by minusSeven
    This is a question most of us have faced while programming: getting stuck! It might be a programming problem or a tool problem; most of us eventually face it. You know something is supposed to work a certain way but it just doesn't. You've tried a number of things to solve it, but nothing helps, and you are not sure why. I once remember being stuck for hours at my programming job. Eventually I figured out that, for one reason or another, my IDE wasn't recompiling my new changes in some of the classes. This is just an example, but I am sure most of you have faced a similar situation. So how would you go about solving it if you didn't have access to Google or Stack Exchange? Let's be honest: using the internet, you aren't solving the problem; somebody else is doing it for you. So if you didn't have access to the internet, or a friend who might help, how would you go about solving it?


  • Dual screen with JUST extended desktop

    - by c001os
    I'm using a dual-monitor set-up a lot, but I have a problem. I need to forbid cloning the desktop; I want to use only the extended desktop feature. Can it be done somehow? It's a problem because when I start my system with two monitors, it automatically starts with a cloned desktop, and when I use the hotkey to switch between monitors the same problem occurs. Always going into the screen resolution options is a pain in the *. I have an Intel HD 3000 video card (Sandy Bridge). Thanks a lot.
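
    One way to script the wanted arrangement (so it can be bound to the hotkey or run at login) is xrandr; the output names here are assumptions, since Intel laptop panels are often LVDS1 and the external head HDMI1 or VGA1:

        # extend to the right instead of cloning (adjust output names to match `xrandr`)
        xrandr --output LVDS1 --auto --output HDMI1 --auto --right-of LVDS1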


  • How to solve programming problems using logic? [closed]

    - by md nth
    I know these principles:

    - Define the constraints and operations. Constraints are the rules you can't break and what you want, determined by the end goal; operations are the actions you can take, your "choices".
    - Buy some time by solving an easy, solvable piece first.
    - Halve the difficulty by dividing the project into small goals and blocks. The more blocks you create, the more hinges you have.
    - Use analogies: reuse other code blocks, yours or from other programmers, that solve a problem similar to the current one.
    - Experiment instead of guessing, by writing "predicted end" code; in other words, create a hypothesis about what will happen if you do this or that.
    - Use your tools first; don't begin with unknown code.
    - By making small goals you won't get frustrated. Start from the smallest problem.

    Are there other principles?


  • Dual boot Ubuntu and Windows 7: I can only boot Ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, but this problem is preventing me. I have/had Windows 7 Professional on my computer. I recently looked into getting Linux, discovered dual-booting, and decided to give it a try. First I created a bootable flash drive with Ubuntu 12.10 64-bit, then followed the instructions on https://help.ubuntu.com/community/WindowsDualBoot. After I finished the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and Windows 7 (loader). So I chose Windows (honestly, at that point I was more concerned that I still had everything on Windows). I then rebooted again and selected Ubuntu. When I selected Ubuntu, the background screen of GRUB (the crimson/burgundy color) stayed for a few seconds, then the screen went black; video here: http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp I tried again with the same results, so I redid the Ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting, the same thing happened.

    After that I was stumped, so I figured it couldn't hurt to experiment; after all, I had backed up my Windows 7 stuff and I have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and, sure enough, after selecting "continue to normal boot" it worked. So I updated everything, but when I rebooted it still wouldn't boot into Ubuntu normally; it would only boot after recovery mode. So I tried installing 32-bit Ubuntu 12.10. The same problem kept happening, and I could still get to Ubuntu through recovery mode.

    Then I went online and tried the terminal (in the Ubuntu I had booted through recovery mode). While using it I discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. I also noticed a notification in the top-right corner that looked like a do-not-enter sign. It said: "an error occurred, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected. Traceback (most recent call last): File "/usr/bin/lsb_release" EOFError: EOF read where not expected'. This usually means that your installed packages have unmet dependencies." Naturally I assumed this was what was causing my boot problems. I downloaded Synaptic and updated everything, and the error went away, but my boot problem remained.

    So I went online and found some things that have worked for others, like this:

        sudo nano /etc/default/grub
        # look for:      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        # change it to:  GRUB_CMDLINE_LINUX_DEFAULT="quiet"
        sudo update-grub

    I did this and I still have the problem. Sorry for the excessive explanation; please help.


  • Bose USB audio: crackling, popping sound; eventually dies

    - by Richard Barrett
    I've been trying to troubleshoot this issue for a while now; any help would be much appreciated. I'm having trouble getting my Bose "Companion 5 multimedia speakers" working with my installation of Ubuntu 12.04 (link to the Bose product here: http://www.bose.com/controller?url=/shop_online/digital_music_systems/computer_speakers/companion_5/index.jsp ). The issue seems to be low-level (not just Ubuntu).

    What happens: when I boot into Ubuntu, I can get Rhythmbox to play OK. However, if I try anything else (an .avi file, a webpage, or the Clementine player with mp3 files) I get crackling, popping, or choppy sounds. If I move the mouse around, especially if it seems graphics-intensive, the problem gets worse (more crackling noises). The more taxing the task appears to be, the more likely it is that the sound will just die altogether until I reboot. For some reason the videos at www.bloomberg.com seem especially bad (my sound normally goes dead in under 45 seconds and won't work until reboot). Both my desktop running Ubuntu 12.04 and my laptop (running the same) have the same crackling problem.

    Troubleshooting so far: a friend of mine who knows Linux well tried to solve it for me without any luck. He took PulseAudio out of the equation, but still had the problem just using ALSA. Among the many things he tried was adjusting the latency, but that didn't help either. I've also tried things like adjusting the USB device settings in the config file from -2 to -1 so that it will use my USB sound, and I also commented out the lines that would stop that. These don't do anything. (That really seems to be for someone getting no sound at all, so it's not surprising it won't work.) My friend's laptop running Arch Linux could play my Bose USB speakers without any problems. I also tried setting my daemon.conf file to use 6 channels (based on http://lotphelp.com/lotp/configure-ubuntu-51-surround-sound ) but that didn't work either. I recently used a DVD to boot into Ubuntu Studio 12.04 (because it uses a live audio kernel) and this happened: I got perfect sound for a minute or two, but when I started moving windows around while sound was playing, the sound died again.

    Perhaps more interesting: there is a headphone-out jack on the Bose system. When I use it, the audio is perfect for all applications (even the deadly bloomberg.com videos with an .avi playing at the same time while moving windows around). Also, there is an audio-in jack on the Bose system. I can use a male-to-male mini jack to go from my sound card's output to the Bose input, and then all sound works perfectly. However, it still requires the Bose to be plugged into USB, otherwise I lose all sound.

    Any thoughts? Any suggestions for troubleshooting? (Or any suggestions for somewhere else to post to solve this?) Any logs or other files I can provide to help someone help me work this out? Your help is much appreciated! Rick

    BTW: I sometimes get people posting responses like "My Bose USB system works great with Ubuntu 12.04," without any more details. Is there anything I should ask such people to narrow down my problem? (It's kind of annoying to hear such a response because it doesn't help solve my problem.)
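
    One experiment sometimes suggested for crackling USB audio under PulseAudio is to disable timer-based scheduling; a sketch, assuming the stock /etc/pulse/default.pa loads module-udev-detect (this is an assumption, not a confirmed fix for this hardware):

        # in /etc/pulse/default.pa, replace the udev-detect line with:
        load-module module-udev-detect tsched=0

    After editing, restart PulseAudio (e.g. `pulseaudio -k`) and retest with one of the trigger cases above.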


  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before, and saw again today, seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent. To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:

        <af:forEach var="item" items="#{itemList.items}">
          <af:commandLink id="cl1" text="#{item.label}"
                          action="#{item.doAction}" partialSubmit="true"/>
        </af:forEach>

    Looks innocent enough, right? We see a bunch of links printed out; great. The issue here, though, is the id attribute. Logically you can kind of see the problem: the forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified, cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing?

    The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n): in that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime, and subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue.

    The Solution: once you're aware of this one, it's really simple to fix. There are two options:

    1. Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop, and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.
    2. Use the var attribute of the loop to generate a unique id for each child instance, for example, in the above case: <af:commandLink id="cl1_#{item.index}" ... />.

    So, one to watch out for in your upgrades to JSF 2 and one, perhaps, for your coding standards today to prepare you for it. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527
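
    As a footnote, if the items themselves don't expose an index property, af:forEach's varStatus attribute can supply one; a hedged sketch of option 2 along those lines (the varStatus usage here is assumed from the tag's documented attributes, not taken from the original example):

        <af:forEach var="item" varStatus="vs" items="#{itemList.items}">
          <af:commandLink id="cl1_#{vs.index}" text="#{item.label}"
                          action="#{item.doAction}" partialSubmit="true"/>
        </af:forEach>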


  • Resolve SRs Faster Using RDA - Find the Right Profile

    - by Daniel Mortimer
    Introduction: Remote Diagnostic Agent (RDA) is an excellent command-line data collection tool that can aid troubleshooting / problem solving. The tool covers the majority of Oracle's vast product range, and its data collection capability is comprehensive. RDA collects data about the operating system and environment, including:

    - environment variables
    - kernel settings
    - network
    - o/s performance
    - o/s patches
    - and much more

    and about the Oracle products installed, including:

    - patches
    - logs and debug metrics
    - configuration
    - and much more

    In effect, RDA can obtain a snapshot of an Oracle product and its environment. Oracle Support encourages the use of RDA because it greatly reduces service request resolution time by minimizing the number of requests from Oracle Support for more information. RDA is designed to be as unobtrusive as possible; it does not modify systems in any way. It collects useful data for Oracle Support only, and a security filter is provided if required.

    Find and Use the Right RDA Profile: one problem with any tool or utility that covers a large range of products is knowing how to target it against only the products you wish to troubleshoot. RDA does not have a GUI, nor does it have an intelligent mechanism for detecting and automatically collecting data only for those Oracle products installed. Instead, you have to tell RDA what to do. There is a mind-bogglingly large number of RDA data collection modules which you can configure RDA to use. It is easier, however, to set up RDA to use a "profile". A profile consists of a list of data collection modules and predefined settings; as such, profiles can be used to diagnose a problem with a particular product or combination of products.

    How to run RDA with a profile? (<rda> represents the command you selected to run RDA, for example rda.pl, rda.cmd, rda.sh, or perl rda.pl.)

    1. Use the embedded spreadsheet to find the RDA profile which is appropriate for your problem / chosen Oracle Fusion Middleware products.
    2. Perform the setup: <rda> -S -p <profile_name>
    3. Run the data collection: <rda>

    If you want to perform setup and run in one go, use a command such as: <rda> -vnSCRP -p <profile name>

    For more information, refer to: Remote Diagnostic Agent (RDA) 4 - Profile Manual Pages [ID 391983.1]

    Additional hints / tips:

    1. Be careful! Profile names are case sensitive.
    2. When profiles are not used, RDA considers all existing modules by default. For example, if you have downloaded RDA for the first time and run the command <rda> -S, you will see prompts for every RDA collection module, many of which will be of no interest to you. Also, you may, in your haste to work through all the questions, forget to say "Yes" to the collection of data that is pertinent to your particular problem or product. Profiles avoid such tedium and help ensure the right data is collected the first time of asking.
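
    As a concrete illustration, a one-shot setup-and-collect run might look like the following; the profile name here is purely hypothetical and would come from the spreadsheet mentioned above:

        ./rda.sh -vnSCRP -p MyMiddlewareProfile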


  • How can I fix my keyboard layout?

    - by Scott Severance
    For a long time, I've had my keyboard configured to use the layout currently known as "English (international AltGr dead keys)." I like this layout because without any modifier keys it's identical to the US English keyboard, but when I hold Right Alt I can get accented letters and other characters not available on a standard US English keyboard. In Oneiric, however, the layout is messed up. Right Alt+N produces "ñ" as expected, and another method works: Right Alt+`, E produces "è", also as expected. But there's no way to type "é", which is probably the accented letter I type the most. I expect Right Alt+A, E to do the trick, but instead of a dead key for the acute accent, it uses a method of combining characters to create the hybrid "´e". This hybrid looks like the proper "é" in some settings, but it isn't the same character and doesn't always work. (For example, in the text input box as I type this, it looks the same as the proper character, but when displayed on the site for all to see, it looks very wrong, at least on my machine.) Ditto for all other characters with an acute accent, though some are available directly as pre-composed characters: for example, Right Alt+I yields "í". How can I change the acute accent on the A key to a proper dead key? Perhaps the more general version of this is: how can I tweak my keyboard layout?

    Update: I just tested this on my other machine, also running Oneiric but upgraded from previous versions. I have no problems with the second machine. The problem machine was a fresh install of Oneiric, but I kept my old $HOME when I did the fresh install.

    Clarification: even if an answer doesn't address my specific examples, I would still accept it if it provided enough detail for me to find the layout and tweak it according to my needs.

    Major Update: after working through the information gained through Jim C's and Chascon's helpful replies, I've learned something new: the problem isn't with the layout itself, but with the fact that the selected layout isn't being applied. When I looked at the definition in /usr/share/X11/xkb/symbols/us of the layout I've been running for a long time, I found that the definition doesn't match what I get when I type. In addition, the keyboard layout dialog that's supposed to show the current layout looks different from the way the layout is defined in the file I mentioned, and matches what actually happens when I type. Following Jim C's suggestion, I created a new layout in /usr/share/X11/xkb/symbols/us containing some modifications to the layout I want. I can select my layout from the keyboard properties, and I can use it on the console following Chascon's post, but the layout I get when typing is unchanged. Apparently, there's a different layout defined somewhere that's overriding what I've set. Where is that layout hiding? This problem occurs in Unity (3D and 2D), but I was able to get the correct layout set in Xfce. In case it's relevant, this problem has occurred since I installed Oneiric fresh on this machine (though I preserved my $HOME); I don't recall whether it occurred before the reinstall. Also, in case it's relevant, I run iBus so I can type Korean. I have a few difficulties with iBus, but I doubt they're related.
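
    A quick way to test whether the layout definition itself is sound, bypassing the desktop's settings machinery, is to load it directly for the current X session; altgr-intl is the variant name X.org uses for "English (international AltGr dead keys)":

        # show what the server currently thinks is loaded
        setxkbmap -query
        # load the intended layout directly
        setxkbmap -layout us -variant altgr-intl

    If "é" then types correctly, the definition is fine and the problem lies in whatever applies (or overrides) the saved setting at login.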


  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today's standards but which performed the useful function of polling every system, including all the servers in my charge. It ran a configurable script for each service that you needed to monitor, which was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 ("hurtin' real bad") to 1 ("things is great"). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it. It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully, they said; particularly flesh-tints.) If you are instantly alerted when things start to go wrong, then there is a good chance you can fix it before being alerted to the problem by the users of the system.

    There is a world of difference between this sort of tool, one that gives whoever is 'on watch' in the server room the first warning of a potential problem on any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA brain. I like to get the early warning, to get the right information to help diagnose a problem: no auto-fix, just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn't do anything for me, especially before I know where I should be flying.

    Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor. It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.

    Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA's skills is attractive to many, I can't see it happening soon. SQL Server's complexity increases faster than the panaceas can be created. In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and provide the right sort of detail and 'evidence' to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.


  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents:

        // Interfaces
        interface IParent<TChild> { List<TChild> Children { get; set; } }
        interface IChild<TParent> { TParent Parent { get; set; } }

        // Classes
        class Top : IParent<Middle> {}
        class Middle : IParent<Bottom>, IChild<Top> {}
        class Bottom : IChild<Middle> {}

        // Usage
        var top = new Top();
        var middles = top.Children; // List<Middle>
        foreach (var middle in middles)
        {
            var bottoms = middle.Children; // List<Bottom>
            foreach (var bottom in bottoms)
            {
                var parent = bottom.Parent;      // access the parent (Middle)
                var grandparent = parent.Parent; // access the grandparent (Top)
            }
        }

    All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it.

    Data Mapper: my favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store:

        class TopMapper
        {
            public Top FetchById(int id)
            {
                var top = new Top(DataStore.TopDataById(id));
                top.Children = MiddleMapper.FetchForTop(top);
                return top;
            }
        }

        class MiddleMapper
        {
            public Middle FetchById(int id)
            {
                var middle = new Middle(DataStore.MiddleDataById(id));
                middle.Parent = TopMapper.FetchForMiddle(middle);
                middle.Children = BottomMapper.FetchForMiddle(middle);
                return middle;
            }
        }

    This way I can have one mapper per data store, build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem, because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.

    Requests in the object: in this approach the objects request their Parents and Children themselves:

        class Middle
        {
            private List<Bottom> _children = null; // cache

            public List<Bottom> Children
            {
                get
                {
                    _children = _children ?? BottomMapper.FetchForMiddle(this);
                    return _children;
                }
                set
                {
                    BottomMapper.UpdateForMiddle(this, value);
                    _children = value;
                }
            }
        }

    I think this is an example of the repository pattern. Is that correct? This solution seems neat: the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service, save it back to the database, then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.

    Other solutions: are there any other solutions which could take care of the multiple-stores problem but also mean that I don't need to build / request all the data every time?


  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors and most applications really need to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows explosively with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions, a number that itself grows exponentially in n. Of course, many of those interleavings will have identical behaviour, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favour, and why. Cheers, Tony.


  • ScriptResource.axd Access is denied. Cross-Domain iFrame

    - by EtienneT
    We have a web page containing an iframe; the framed page shares an authentication cookie with its parent page. For example, the iframe page is on the domain foo.domain.com and the page containing the iframe is on foo2.domain.com; both share a cookie from domain.com. Authentication works great, but the problem is that under ASP.NET in IE7 we always get a JavaScript error: Access is denied. ScriptResource.axd. We are using ASP.NET 3.5 and also the Ajax Control Toolkit (latest version, 3.0.30930.0). The problem doesn't occur in IE8, and there's no problem in Firefox or Chrome either. Has anyone encountered this problem before?
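
    For what it's worth, the classic workaround for cross-subdomain iframe scripting in older IE is to relax document.domain on both pages; a sketch, offered as an assumption rather than a confirmed fix for the ScriptResource.axd error:

        <script type="text/javascript">
            // run this on the pages at both foo.domain.com and foo2.domain.com
            document.domain = "domain.com";
        </script>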


  • WCF timeouts are a nightmare

    - by Greg
    We have a bunch of WCF services that work almost all of the time, using various bindings, ports, max sizes, etc. The super-frustrating thing about WCF is that when it (rarely) fails, we are powerless to find out why it failed. Sometimes you will get a message that looks like this: System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '01:00:00'. --- System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. The problem is that the local socket timeout it's giving you is merely an attempt to be convenient. It may or may not be the cause of the problem. But OK, sometimes networks have issues. No big deal. We can retry or something. But here's the huge problem. On top of failing to tell you which precisely which timeout (if any) resulted in the failure ("your server-side receive timeout was exceeded," or something, would be helpful), WCF seems to have two types of timeouts. Timeout Type #1) A timeout, that, if increased, would increase the chance of your operation's success. So, the pertinent timeout is an hour, you are uploading a huge file that will take an hour and twenty minutes. It fails. You increase the timeout, it succeeds. I have no no problem with this type of timeout. Timeout Type #2) A timeout which merely defines how long you have to wait for the service to actually fail and give you an error, but modifying the value of this timeout has no impact on the chance of success. Basically, something happens during the first second of the service request which mucks things up. It will never recover. WCF doesn't magically retry the network connection for you. Fine, sometimes establishing a network connection doesn't go well. But, if your timeout is 2 hours, you have to wait 2 whole hours with no chance of it ever working before it finally acknowledges that it didn't work and gives you the error. But the error you see in both cases looks the same. With timeout Type #2, it still looks like you are running into a timeout. But, you could increase all of your timeouts to 4 years, and all it would do is make it take 4 years to get an error message. I know that Type #2 exists because I can do an operation that is known to complete in less than a minute when successful, and have it take 2 hours to fail. But, if I kill it and retry, it succeeds quickly. (If you are wondering why there might be a 2 hour timeout on an operation that takes less than a minute, there are times I run the operation with a much larger file and it could take over an hour.) So, to combat the problem with Type #2, you'd want your timeout to be really quick so you immediately know if there is a problem. Then you can retry. But the insurmountable problem is that because I don't know which timeouts are the cause of failure, I don't know what timeouts are Type #1 and which ones are Type #2. There may be one timeout (let's say the client-side send timeout) that acts like Type #1 in some cases and Type #2 in others. I have no idea, and I have no way of finding out. Does anyone know how to track down Type #2 timeouts so I can set them to low values without having to shorten actual (read: Type #1) timeouts and lower the chance of success? Thank you.


  • How to fix the endless printing loop bug in Nevrona Rave

    - by Sean B. Durkin
    Nevrona Designs' Rave Reports is a report engine for use with Embarcadero's Delphi IDE. This is what I call the Rave endless loop bug. In Rave Reports version 6.5.0 (VCL10), which comes bundled with Delphi 2006, there is a notorious bug that plagues many Rave report developers: if you have a non-empty dataset, and the data rows for this dataset fit exactly into a page (that is to say, there are zero widow rows), then upon PrintPreview, Rave will get stuck in an infinite loop generating pages.

    This problem has been previously reported in this newsgroup under the following headings:

    1. "error: generating infinite pages"; Hugo Hiram 20/9/2006 8:44PM
    2. "Rave loop bug. Please help"; Tomas Lazar 11/07/2006 7:35PM
    3. "Loop on full page of data?"; Tony Chistiansen 23/12/2004 3:41PM
    4. reply to (3) by another complainant; Oliver Piche
    5. "Endless lopp print bug"; Richso 9/11/2004 4:44PM

    In each of these postings there was no response from Nevrona, and no solution was reported. Possibly, the problem has also been reported on an allied newsgroup (nevrona.public.rave.reports.general), to wit:

    6. "Continuously generating report"; Jobard 20/11/2005

    although it is not clear to me whether (6) is the Rave endless loop bug or another problem. This posting did get a reply from Nevrona, but it was more in relation to multiple regions ("There is a problem when using multiple regions that go over a page-break.") than the problem of zero widows.


  • Adding a new entity gives error: EntityCommandCompilationException was unhandled by user code

    - by programmerist
    I have 5 tables in the project so far. I added a new table (the Urun entity) and wrote the code below in project.BAL:

        public static List<Urun> GetUrun()
        {
            using (GenoTipSatisEntities genSatisUrunCtx = new GenoTipSatisEntities())
            {
                ObjectQuery<Urun> urun = genSatisUrunCtx.Urun;
                return urun.ToList();
            }
        }

    Then I receive the data from BAL in UI.aspx:

        using project.BAL;

        namespace GenoTip.Web.ContentPages.Satis
        {
            public partial class SatisUrun : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    if (!IsPostBack)
                    {
                        FillUrun();
                    }
                }

                void FillUrun()
                {
                    ddlUrun.DataSource = SatisServices.GetUrun();
                    ddlUrun.DataValueField = "ID";
                    ddlUrun.DataTextField = "Ad";
                    ddlUrun.DataBind();
                }
            }
        }

    I added Urun later. The error appears at the ToList method: EntityCommandCompilationException was unhandled by user code. Error detail:

        Error 1 Error 3007: Problem in Mapping Fragments starting at lines 659, 873: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL

        Error 2 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND PK is in 'FaturaDetay' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 874 11 GenoTip.DAL

        Error 3 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'FaturaDetay' EntitySet AND PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL

        Error 4 Error 3007: Problem in Mapping Fragments starting at lines 748, 879: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL

        Error 5 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND PK is in 'Satis' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 880 11 GenoTip.DAL

        Error 6 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'Satis' EntitySet AND PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL


  • WCF chunking/streaming - make it transparent for client

    - by bybor
    While developing a WCF service I've faced the problem of transferring large data as method params (4 MB of raw size, not considering transfer/message overhead). The solution to this problem is to use chunking or streaming, but all the samples I've seen assume the client is aware of the method used and sends/receives portions of the data at the available block size. The problem (for me) is that it's then not possible to call just one method, like SaveData(DataInformation info); instead you have to write a wrapper method which iterates something like SaveDataChunk(byte[] buffer). Could it be somehow made transparent for the client, so it just calls SaveData?
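
    A minimal sketch of the kind of client-side wrapper being described, hiding the chunked service call behind a single SaveData method; SaveDataChunk is taken from the question's naming, while IDataService and the block size are assumptions:

        public class ChunkingClient
        {
            private const int BlockSize = 64 * 1024;  // assumed block size
            private readonly IDataService _service;   // hypothetical service contract

            public ChunkingClient(IDataService service)
            {
                _service = service;
            }

            // Callers see one logical operation; the chunking stays internal.
            public void SaveData(byte[] data)
            {
                for (int offset = 0; offset < data.Length; offset += BlockSize)
                {
                    int count = Math.Min(BlockSize, data.Length - offset);
                    byte[] buffer = new byte[count];
                    Array.Copy(data, offset, buffer, 0, count);
                    _service.SaveDataChunk(buffer);
                }
            }
        }

    This makes the chunking transparent only on the client side, of course; the service still has to reassemble the chunks.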


  • Errors with shotgun gem and msvcrt-ruby18.dll when running my Sinatra app

    - by Adam Siddhi
    Greetings, every time I make a change to a Sinatra app I'm working on and try to refresh the browser (at http://localhost:4567/), the browser refreshes and the console window seems to restart the WEBrick server, but the content in the browser window does not change. A friend of mine told me it was a shotgun issue and referred me to rtomayko's shotgun gem: http://github.com/rtomayko/shotgun On that page I read that the shotgun gem would basically solve my problem, allowing changes made to my app to show up in the browser window after I refresh it. So I installed the shotgun gem, and the installation was successful.

    To activate shotgun you type shotgun before the file name; in this case my Sinatra app's file name is shortener.rb. When I type shotgun shortener.rb to run my Sinatra app, I get this error:

        C:\ruby\sinatra> shotgun shortener.rb
        c:/Ruby19/lib/ruby/gems/1.9.1/gems/shotgun-0.6/bin/shotgun:137:in `<main>': No such file or directory - uname (Errno::ENOENT)
            from c:/Ruby19/lib/ruby/gems/1.9.1/gems/shotgun-0.6/bin/shotgun:137:in `block in <main>'
            from c:/Ruby19/lib/ruby/gems/1.9.1/gems/shotgun-0.6/bin/shotgun:136:in `each'
            from c:/Ruby19/lib/ruby/gems/1.9.1/gems/shotgun-0.6/bin/shotgun:136:in `find'
            from c:/Ruby19/lib/ruby/gems/1.9.1/gems/shotgun-0.6/bin/shotgun:136:in `<top (required)>'
            from c:/Ruby19/bin/shotgun:19:in `load'
            from c:/Ruby19/bin/shotgun:19:in `<main>'

    I should also mention that before testing whether shotgun worked, I installed the mongrel gem (I realize I should have checked shotgun first; installing mongrel has complicated this problem). So on top of getting the error message above, I also get a pop-up window from Ruby.exe saying:

        Ruby.exe - Unable to load component
        This application has failed to start because msvcrt-ruby18.dll was not found.
        Re-installing the application may fix this problem.

    I have no idea what msvcrt-ruby18.dll is, but I know that installing shotgun and/or mongrel created this problem. Where to go from here? Thanks, Adam


  • Biztalk Ordered Delivery direct bound to multiple ports

    - by WtFudgE
    Hi, another ordered delivery problem. We have an orchestration bound to a send port which has ordered delivery set to true. Another send port also picks up these messages through filtering; this port also has ordered delivery. Now, for some reason, when there are multiple ports subscribing to the message and one of them is directly bound, only one of the ports is used; not both ports produce output. If I unenlist one of the ports, the other always produces output, and this works both ways. We used to have this set up with two ports that both used filters, and that worked, but we had to change one to a direct port, and the problem has occurred since then. Also, BizTalk's choice of port is pretty random: on our server it chooses, for example, port A, and when I reproduce the same problem on my local machine it chooses, for example, port B. It's kind of a weird problem and we have no idea what the cause could be.


  • Sparse parameter selection using Genetic Algorithm

    - by bgbg
    Hello, I'm facing a parameter selection problem which I would like to solve using a Genetic Algorithm (GA). I'm supposed to select no more than 4 parameters out of 3000 possible ones. Using the binary chromosome representation seems like a natural choice. The evaluation function punishes too many "selected" attributes, and if the number of attributes is acceptable, it then evaluates the selection. The problem is that under these sparse conditions the GA can hardly improve the population. Neither the average fitness nor the fitness of the "worst" individual improves over the generations. All I see is a slight (even tiny) improvement in the score of the best individual, which, I suppose, is a result of random sampling. Encoding the problem using indices of the parameters doesn't work either. This is most probably due to the fact that the chromosomes are directional, while the selection problem isn't (i.e. the chromosomes [1, 2, 3, 4], [4, 3, 2, 1], [3, 2, 4, 1], etc. are identical). What problem representation would you suggest? P.S. If this matters, I use PyEvolve.
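
    For context, a minimal PyEvolve sketch of the binary-chromosome setup being described; evaluate_selection is a hypothetical stand-in for the real domain-specific scoring, and the zero-score penalty is just one way to punish oversized selections:

        from pyevolve import G1DBinaryString, GSimpleGA

        N_PARAMS = 3000
        MAX_SELECTED = 4

        def evaluate_selection(indices):
            # placeholder for the real, domain-specific fitness measure
            return 1.0

        def eval_func(chromosome):
            selected = [i for i, bit in enumerate(chromosome) if bit == 1]
            if len(selected) > MAX_SELECTED:
                return 0.0  # punish chromosomes that select too many parameters
            return evaluate_selection(selected)

        genome = G1DBinaryString.G1DBinaryString(N_PARAMS)
        genome.evaluator.set(eval_func)
        ga = GSimpleGA.GSimpleGA(genome)
        ga.evolve(freq_stats=50)
        print(ga.bestIndividual())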


  • qemu-kvm virtual machine virtio network freeze under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows:

        /usr/libexec/qemu-kvm \
            -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \
            -boot order=c \
            -m 2G \
            -smp cores=1,threads=2 \
            -vga std \
            -name rb-dev2-www1-vm \
            -vnc :84,password \
            -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \
            -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \
            -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \
            -rtc base=utc \
            -device piix3-usb-uhci \
            -device usb-tablet

    /etc/qemu-ifup (used by the above command) is a very simple script, containing the following:

        #!/bin/sh
        sudo /sbin/ifconfig $1 0.0.0.0 promisc up
        sudo /usr/sbin/brctl addif br0 $1
        sleep 2

    And here's the info on br0 and other interfaces:

        avl-host3 14# brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.180373f5521a       no              bond0
                                                                tap84
        virbr0          8000.525400858961       yes             virbr0-nic

        avl-host3 15# ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
        3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
        4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff
        5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff
        6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
            inet6 fe80::1a03:73ff:fef5:521a/64 scope link
               valid_lft forever preferred_lft forever
        7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
            inet 172.16.1.46/24 brd 172.16.1.255 scope global br0
            inet6 fe80::1a03:73ff:fef5:521a/64 scope link
               valid_lft forever preferred_lft forever
        8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
            inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
        9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
            link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
        12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
            link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link
               valid_lft forever preferred_lft forever

    bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation; they are unused (as far as I know).

    The guest runs perfectly until I run a large rsync, when the network will freeze after some seemingly random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via VNC, but it is unable to speak out of its network interface. Any attempt to ping from the guest gives a "Destination Host Unreachable" error for 3 out of 4 packets and no reply for every fourth packet.
    Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load. Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine. If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back.

    I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet.

    There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address. I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing. In any case, I'm hoping someone has some idea what could cause this.

    Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable for a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?
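
    To be explicit about the stopgap, the substitution described in the addendum amounts to changing only the NIC model in the first -net flag; everything else stays as in the command line above:

        -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=e1000 \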

