Search Results

Search found 15591 results on 624 pages for 'problems'.


  • Why is my HDD not showing up?

    - by Mike
    I'm on Windows 7 and got a new 2 TiB internal (non-external) hard disk. When I plug it in, it doesn't show up in the Disk Management list in the Computer Management window (the one you get by right-clicking My Computer and selecting "Manage"). At first I thought it was broken, but I could see it on a friend's computer. I've run a full disk scan using the official tool from Western Digital: no problems. I've formatted and partitioned it at another friend's place; I've even fully encrypted the disk without any problems, then mounted it and placed a text file on it with the word "hello" inside and saved the file. When I boot my own computer the disk shows up in the BIOS, so the disk is not broken. I've tried changing which SATA slot on the motherboard I plug it into; it makes no difference. After all this, why won't Windows 7 discover the disk?
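
    For illustration, a diskpart sketch of the first things I would check from an elevated command prompt (the disk number is a placeholder; a disk that is visible in the BIOS but absent from Disk Management is sometimes just offline or marked read-only):
      diskpart
      DISKPART> rescan
      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> online disk
      DISKPART> attributes disk clear readonly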

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'top down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organisation, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'succeeding with SOA'. Organisations that focus too much on bottom-up development, or that waste too much time and money on top-down approaches that don't produce results, are often advised to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'succeeding with SOA', then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of: an organisation will have a number of disparate IT systems; some of these systems will have redundant data and functionality; integration will give considerable ROI; integration will already be under way; services will already exist in the organisation; and these services will be inconsistent in their implementation and in their governance. So there are three goals here: (1) alignment between the business and IT, (2) integration of disparate systems, and (3) management of services. Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved through more efficient IT processes.

    Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce an SOA metamodel is started. The result will then be an unstructured gaggle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with far-reaching consequences for all of our LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model being valid at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel and manage the changes in each layer? If company A has chosen to model its business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognises the problem of wild services springing up everywhere and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution also means the models are guaranteed to change in the not-so-near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains, and that the whole concept of SOA, as it is described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solutions together in order to create a magic bullet, and then convinced ourselves that the bullet actually exists? Company A wants a closer relationship between the business and its IT department, in order to become a more flexible organisation. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, it will end up in the same situation as company B, with wild services. If company B focuses on modeling, it could easily end up with the same problems as company A. Now we have two companies that, a short while ago, had one problem each and now have two problems each. This has happened because of a focus on 'succeeding with SOA' rather than on solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organisation, but only if the organisation realises this and can develop such a strategy. That strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analysing the solutions to a problem and judging each solution's level of coupling. If we have two applications that each perform separate functions but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) from the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without changes rippling through from one side to the other. This would allow company A to carry on modeling and company B to develop a service platform, each achieving its intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map its service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling work.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realisable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view and present uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services, with their beautifully modeled interfaces, as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling and so on.

    This solves a big problem. The pressure to deliver services quickly is always there in projects; it is very important to show value quickly when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organisations, and it requires a certain maturity to realise and drive forward. It is also entirely possible for a company to benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest being the transformations necessary between the virtual interfaces and the service implementations. Bad choices when developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with pre-developed services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution means that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • HDMI video output not working for external monitor

    - by user291852
    I have installed Ubuntu GNOME 14.04 with gnome-flashback from scratch on my old HP HDX 16 laptop (Core 2 Duo P8600 + Nvidia GT 9600M + 4 GB of RAM) and I have problems with the HDMI output (I use it to extend my desktop onto a Dell U2412M 1920x1200 monitor). Below I summarize the configurations I have tried:
    - Using the open-source nouveau drivers, only the laptop monitor works; there is no signal on the external monitor connected to the HDMI output. However, the output of the xrandr command shows the HDMI output as connected with the correct resolution, 1920x1200 (which I find really weird).
    - With the nouveau drivers, a VGA connection to the external monitor works without problems, but the image is blurry compared to the HDMI connection.
    - Using the nvidia drivers (I have tried different versions: nvidia-331-updates and the xorg-edgers versions nvidia-334 and nvidia-337), the HDMI output works, but I get system instabilities, random crashes and display freezes. I can't even switch to a terminal with Ctrl+Alt+F1, so I have to shut the laptop down with the power button.
    I would really like to use the HDMI output with the nouveau drivers, to avoid the instabilities I experienced with the nvidia drivers, but I can't figure out how to make it work. Alessandro
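
    For reference, a sketch of what I plan to try next under nouveau (the output names LVDS-1/HDMI-1 are assumptions; use whatever xrandr actually reports on this machine):
      xrandr --query                                        # list outputs and detected modes
      xrandr --output HDMI-1 --mode 1920x1200 --right-of LVDS-1
      dmesg | grep -i -e hdmi -e nouveau | tail -n 40       # look for driver errors if the screen stays blank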

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm. Now, when using Firefox, Safari or Chrome, everything is fine: when the user tries to open the site, he gets the login box, enters simply "username" and "password" (let's pretend that those are an actual login and password :P) and he gets into the site. When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters "username" and "password" again, but they get rejected! And when the login box pops up a second time, it has the username pre-filled as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (that is "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username" I get into the site without problems. So my guess is that IE sends the wrong username (prefixed with the wrong domain) if no domain is specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sysadmin, so configuring servers isn't my strong side... :P
    UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username"/"password" login in Chrome/Firefox and when doing an "ad-domain\username"/"password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" in IE I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use.
    UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler. While doing so I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is almost twice as long as the one sent by FF. I tried disabling Windows Integrated authentication and leaving only Digest enabled; that fixed the problem (meaning IE used the right realm, just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc.). Any ideas?
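
    One hedged server-side experiment for IIS 6 (site id 1 is an assumption; adsutil.vbs lives in C:\Inetpub\AdminScripts): check whether Integrated auth is negotiating Kerberos and force it to plain NTLM, which in this era was a common suggestion for IE-only prompt/realm oddities:
      cscript adsutil.vbs get w3svc/1/root/NTAuthenticationProviders
      cscript adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"
      iisreset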

    Read the article

  • syslog-ng fails to log on lxc host

    - by christian
    We are running CentOS 6 servers with multiple lxc containers. For system logging we are using syslog-ng. After a while the syslog-ng daemon stops logging messages, but the daemon keeps running. This happens on the host and inside the containers (where a separate syslog-ng is running) as well. We have not found any pattern to the failures yet, but we assume it has something to do with lxc, because we don't have these problems on other hosts. We suspect that the problem occurs when more than one lxc container is running, and that only "new" processes cannot log. We are running the following software versions: CentOS Linux 6.4/6.5, lxc 0.7.5, syslog-ng 3.2.5. Do you have any ideas? Best regards, trademesh
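
    A small sketch of the checks we run when it happens; the idea is to see whether the /dev/log socket in the affected environment still belongs to the running syslog-ng:
      logger "syslog-ng test $(date)"         # does a fresh message reach the log?
      tail -n 5 /var/log/messages
      ls -l /dev/log                          # is the socket still present?
      lsof -U | grep syslog-ng | head         # which unix sockets does the daemon actually hold open?
      # repeat the same logger test inside each container and compare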

    Read the article

  • Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems on my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in KPackageManager and using other apps is really painful, while, for example, Opera doesn't have any problems, and hostname resolution from ping is fast as well. I checked the proxy settings of the affected applications as well as of the general system, and there are none, so it doesn't seem as though there is anything in between. Does anybody have an idea of what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and which works well with my other computer). Any kind of advice would be really helpful. Edit: it seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?
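
    As a follow-up to the edit above, a sketch of disabling IPv6 system-wide instead of per-application (sysctl names as on modern kernels; whether Kubuntu 9.10's kernel honours them fully is an assumption):
      ip -6 addr show                                            # is IPv6 active at all?
      sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
      sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
      echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf   # persist across reboots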

    Read the article

  • Compiling to a binary from source

    - by Chords
    I'm using wkhtmltopdf on a 64-bit Linux server and I'm running into problems with the version. As seen here: http://code.google.com/p/wkhtmltopdf/downloads/list, there are slim pickings when it comes to pre-compiled binaries. I started with version 0.9.9, which has a few bugs. I upgraded to 0.11.0 RC1 only to find a slew of new problems, namely the following: http://code.google.com/p/wkhtmltopdf/issues/detail?id=730. I think 0.10 RC2 would work, and the thread above suggests that compiling from source includes a fix for the error I'm getting, but I don't know how to do that. Can anyone explain how I can create a static binary myself, or would anyone be willing to create and post one for the countless people waiting for this fix?
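
    A rough sketch of a from-source build on a Debian/Ubuntu-style box (the package names and repository URL are assumptions from that era; a fully static binary additionally needs wkhtmltopdf's patched Qt and takes much longer to build):
      sudo apt-get install build-essential libqt4-dev qt4-qmake libqtwebkit-dev
      git clone https://github.com/wkhtmltopdf/wkhtmltopdf.git    # or fetch the tarball for the 0.10 rc2 release
      cd wkhtmltopdf
      qmake && make -j2
      sudo make install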

    Read the article

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though version 1.2.3 is usually a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB/file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has anything to do with a GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:
    - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
    - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often so that you notice any problems early, when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
    - Something in between. Is there a sweet spot?

    Read the article

  • How can I solve Windows PPTP VPN issues?

    - by Robin M
    I'm having persistent problems with Windows PPTP VPN connections. The VPN appears to be up, but the tunnel won't transfer traffic (a ping to a remote IP within the VPN works for a while, and then fails). The client receives routing information via DHCP. When the connection fails, the routing table is still correct, so I don't think it's a routing problem. My internet connection is via an ADSL2 line. There's software to deal with PPTP problems, like TunnelRat, but I don't want to install v1.1 of the .NET Framework, and I'd rather get to the bottom of the problem (I have multiple VPN connections and some are more unreliable than others). What can I do to get to the bottom of this? Alternatively, what can I do to keep the connection alive?

    Read the article

  • How to install rgdal on Ubuntu 12.10?

    - by radek
    I'm struggling to install the rgdal library on Ubuntu 12.10. Installation from within R results in an error:
    Error: gdal-config not found The gdal-config script distributed with GDAL could not be found. If you have not installed the GDAL libraries, you can download the source from http://www.gdal.org/ If you have installed the GDAL libraries, then make sure that gdal-config is in your path. Try typing gdal-config at a shell prompt and see if it runs. If not, use: --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' with appropriate values for your installation. ERROR: configuration failed for package ‘rgdal’ * removing ‘/home/rdk/R/x86_64-pc-linux-gnu-library/2.15/rgdal’ Warning in install.packages : installation of package ‘rgdal’ had non-zero exit status
    R-sig-Geo, these two SE questions and other websites pointed me to the requirement for libgdal1-dev. But when I tried sudo apt-get install libgdal1-dev I ended up with another error message:
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libgdal1-dev : Depends: libgdal-dev but it is not going to be installed E: Unable to correct problems, you have held broken packages.
    Again, when I try to install libgdal-dev another dependency error shows up: The following packages have unmet dependencies: libgdal-dev : Depends: libgeos-dev but it is not going to be installed Depends: libspatialite-dev but it is not going to be installed
    Trying libgeos-dev gives: Depends: libgeos-c1 (= 3.3.3-1.1) but 3.3.3-2~precise2 is to be installed E: Unable to correct problems, you have held broken packages. And libspatialite-dev: Depends: libspatialite3 (= 3.1.0~rc2-1ubuntu1) but 3.1.0~rc2-2~precise1 is to be installed
    Is there any way to tame those dependencies and have rgdal running on Ubuntu? My sessionInfo(): R version 2.15.1 (2012-06-22), Platform: x86_64-pc-linux-gnu (64-bit)
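
    One hedged route around the mismatched precise/quantal packages is to take GDAL/GEOS/spatialite from the ubuntugis PPA so that the -dev packages agree with each other, then rerun the R install (the PPA name is the commonly used one; I have not verified it against 12.10 specifically):
      sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
      sudo apt-get update
      sudo apt-get install libgdal-dev libproj-dev
      R -e 'install.packages("rgdal")'    # rerun once gdal-config is on the PATH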

    Read the article

  • Win 2k3 server's network problem.

    - by Sam
    I'm running four Win2k3 64-bit servers on the same subnet. I've been running them for more than a year without a problem. Recently, I keep losing the connection to one of the servers; let's say it's "server A" that has the problem. Losing the connection means that I can't access server A from the other servers. I've checked whether server A has any internet connection problems or any abnormal entries in Event Viewer, but haven't found anything. The problem is usually resolved if I restart the server, but as time goes by it keeps happening again and again. I can't afford to restart the server every time, and I really want to find the cause. Can anyone help me out? Let me know if you need any more information.

    Read the article

  • Problem with sound driver

    - by JiminP
    I had a problem where sound lagged in Flash (other than that, there was no problem). On the internet I found that installing OSS4 might help, so I installed OSS4, but that brought its own problems (no sound in Flash at all, and I couldn't use the function keys on the laptop, which is quite annoying), so I tried to remove OSS4 and re-install the sound modules. After some mess-ups, sound was gone entirely. I've used Ubuntu for a year, but I don't know how to use the terminal well (all I know is basic commands like sudo, ls, or apt-get). Now I'm trying to recover by following the instructions at this page, but I have some problems... :( When I try to follow the instructions:
    - sudo aplay -l finds no sound drivers.
    - find /lib/modules/`uname -r` | grep snd returns nothing.
    - sudo apt-get install linux-restricted-modules-`uname -r` linux-generic says that it can't find the linux-restricted-modules-3.0.0-13-generic package.
    - lspci -v | grep -A7 -i "audio" prints output, but it doesn't contain anything about the name of the driver.
    - Typing sudo modprobe sn and pressing tab twice only completes to sudo modprobe sn9c102.
    - sudo aptitude --purge reinstall linux-sound-base alsa-base alsa-utils linux-image-`uname -r` linux-ubuntu-modules-`uname -r` libasound2 runs, but didn't change any of the above.
    - sudo apt-get install linux-alsa-driver-modules-$(uname -r) fails because it can't find the package linux-alsa-driver-modules-3.0.0-13-generic.
    - Compiling the ALSA driver doesn't work: when I try to make, it says that /lib/modules/3.0.0-13-generic/build/include/linux/modversions.h doesn't exist.
    I'm using Ubuntu 11.10. Can anyone help me? I can re-install Ubuntu, but I don't want to...
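
    For reference, a rough sketch of what I understand the recovery route to be (the OSS4 and kernel package names are assumptions for 11.10; I have not verified them):
      sudo apt-get purge oss4-base oss4-dkms
      sudo apt-get install --reinstall linux-image-$(uname -r) alsa-base alsa-utils linux-sound-base
      sudo alsa force-reload       # from alsa-utils; reloads the snd-* modules
      aplay -l                     # after a reboot, check whether a card is detected again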

    Read the article

  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of the demo covers basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore difficult to generate enough problems. I've tried the following to get queries into the slow query log:
    - Set the slow query time to 1 second.
    - Deleted multiple indexes.
    - Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
    - Scripted some basic webpage calls using wget.
    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skill to write a complex JMeter test or other load generator. I'm hoping for something built into MySQL, or another Linux trick beyond stress.
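
    A sketch of two tricks I could demo instead (database and table names are placeholders, and the slow-log path depends on the server configuration):
      # 1) trivially exceed long_query_time: time spent in SLEEP() counts toward query time
      mysql -e "SELECT SLEEP(2);"
      # 2) make the server do real work with a self cross-join on any modest table
      mysql demo_db -e "SELECT COUNT(*) FROM some_table a JOIN some_table b JOIN some_table c;"
      # then confirm the entries landed in the slow query log
      sudo tail -n 20 /var/log/mysql/mysql-slow.log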

    Read the article

  • XKB - remap arrow keys and preserve shift behaviour to select text

    - by dgirardi
    I realize arrow key remapping is an old problem; however, I cannot seem to find a good solution that lets me select text with Shift + the remapped keys as I would with the vanilla arrow keys. For instance, if I remap Caps Lock to ISO_Level3_Shift and set xkb_symbols to read either key <AC08> { [ k, K, Down, Down ] }; or key <AC08> { type="THREE_LEVEL", [ k, K, Down ] }; then pressing Shift+CapsLock+K behaves exactly like CapsLock+K (while Shift+Down behaves differently from Down alone). I had somewhat more success using higher-level macro utilities and generating keyboard events (i.e. generating both the Shift and the arrow keypresses); however, that approach has a whole set of different problems - often the UI response to a simulated keypress differs from a real keypress, and there are performance problems as well - I can type faster than the thing can handle. TL;DR: how can you shift-select using remapped arrow keys under X?
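
    For reference, the direction I'm currently exploring is a custom key type that preserves Shift instead of consuming it, so the client sees Shift together with the third-level Down keysym. A sketch, with the xkb_types syntax assumed from the stock types files (untested):
      // hypothetical custom type in an xkb_types section: Shift is preserved, not consumed
      type "MOTION_THREE_LEVEL" {
          modifiers = Shift + LevelThree;
          map[None]                  = Level1;
          map[Shift]                 = Level2;
          map[LevelThree]            = Level3;
          map[Shift+LevelThree]      = Level3;
          preserve[Shift+LevelThree] = Shift;
          level_name[Level1] = "Base";
          level_name[Level2] = "Shift";
          level_name[Level3] = "Motion";
      };
      // and in xkb_symbols:
      // key <AC08> { type="MOTION_THREE_LEVEL", [ k, K, Down ] };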

    Read the article

  • Windows 7 Locking up Randomly

    - by Michael Moore
    I've got a Windows 7 machine that is locking up randomly. It can happen in the first thirty seconds, or hours later. I can't find anything specific that is running when it happens. When it locks, the screen doesn't change, but nothing moves: the waiting icon stops, the mouse stops, the keyboard doesn't work, and so on. I've even tried the crash-on-Ctrl+Scroll Lock registry hack, and it won't even dump the kernel. I've run hardware diagnostics on the RAM and they don't find any problems. I would think it is a hardware issue, but on this exact same machine I can run 64-bit Ubuntu with zero problems. I've even tried reinstalling Windows 7 from scratch, and it still happens. Anyone have any ideas? Any good diagnostic tools to recommend? Thanks! Michael

    Read the article

  • Cannot import video from a DV camcorder over FireWire

    - by qbeuek
    I have a JVC GR-D320 miniDV camcorder that has a FireWire interface. I recently upgraded to Windows 7 RTM (64-bit, fresh installation). When I connect the camcorder through FireWire, I can see it in Device Manager without any warnings or problems, but I cannot capture video from my miniDV tapes. After connecting, AutoPlay displays "Import Video could not find a compatible digital video device. Verify that the digital video device is properly connected and turned on." When using Windows Live Photo Gallery, after selecting the import option my camera is not listed. The camera used to work perfectly on the same hardware before upgrading to Windows 7 RTM 64-bit (it worked fine on Windows XP SP3 32-bit). Googling revealed that people had the exact same problems in Vista, but no solution was provided. Any help?

    Read the article

  • How to simulate pressure with particles?

    - by BeachRunnerJoe
    I'm trying to simulate pressure with a collection of spherical particles in a Unity game I'm building. A couple of notes about the problem:
    - The goal is to fill a constantly changing 2D space/void with small, frictionless spheres.
    - The game is trying to simulate the ever-growing pressure of more objects being shoved into this space.
    - The level itself is constantly scrolling from left to right, meaning that if the space's dimensions are not changed by the user it will automatically get smaller (the leftmost part of the space will continually scroll off-screen).
    I'm wondering what approaches I can take to tackle these problems:
    - Detecting when there is space to fill, and then adding spheres to the space.
    - Removing spheres from the space when it is shrinking.
    - Strategies to simulate pressure on the spheres such that they "explode outwards" when more space is created.
    The current approach I am contemplating is using a constantly moving wall that is off-screen and moves with the screen. This moving wall will push and trap the spheres in the space. As for adding new spheres, I was going to either (1) have spheres replicate themselves upon detecting free space, or (2) spawn them at the left side of the space (where the wall is), pushing the rest of the spheres to fill the space. I foresee problems with idea #1 because it likely wouldn't really create/simulate pressure; idea #2 seems more promising, but raises the question of how to choose a location for these new sphere particles to spawn (and the ramifications of spawning them when there is no space). Thanks so much in advance for your wisdom!

    Read the article

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm designing an entity system and I've run into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor, which just contains the component name, field types and field names. An entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.
    Actual problem: the problem is that some components depend on others (Model, Sprite, PhysicalBody and Animation depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [S]prite, [P]hysicalBody and [H]ealth:
    Tank: Transform, Sprite, PhysicalBody
    BgTree: Transform, Sprite
    House: Transform, Sprite, Health
    and create 4 Tanks, 5 BgTrees and 2 Houses; my pools will then look like:
    TTTTTTTTTTT // Transform pool
    SSSSSSSSSSS // Sprite pool
    PPPP // PhysicalBody pool
    HH // Health component
    There is no way to process them using indices. I've spent 3 days working on this and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity, but that wasn't a good idea either. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools), but I guess that would be a nightmare for CPU caches. Thanks

    Read the article

  • Fabric and cygwin don't work with windows UNC paths

    - by tcoopman
    I have some strange problems with Fabric deployment to Windows Server 2008 R2. The thing I'm trying to accomplish is to copy some files to a shared folder with a Fabric script (the script does a lot of other things too, but only this step gives me problems). This is the problem: when I try to access a UNC (Universal Naming Convention) path, I always get access-denied style errors if I run the command through Fabric. When I run the same command in an ssh prompt (same user), it works fine. Examples:
    cmd: robocopy f:/.... //share
    result: in ssh this works fine; in Fabric I get "Logon failure: the user has not been granted the requested logon type at this computer."
    cmd: cd //share
    result: in ssh this works fine; in Fabric I get "//share: Not a directory"
    Further information: uname -a and whoami return exactly the same thing in Fabric and ssh. I also tried things like mount and net use, but these commands all have kind of the same problem.
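
    One thing I'm considering, sketched with placeholder share names and credentials: authenticate the remote session to the file server explicitly before copying, since a non-interactive logon may not carry network credentials with it:
      # run inside the Fabric-driven session (share, domain, user and password are placeholders)
      net use \\fileserver\share /user:MYDOMAIN\deployuser secretpassword
      robocopy F:\build \\fileserver\share\build /E
      net use \\fileserver\share /delete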

    Read the article

  • How do I fix a garbled screen on a Gateway LT3103u?

    - by paracaudex
    I've been having garbled screen problems on a Gateway LT3103u running Ubuntu for a while. I just did a fresh install of Ubuntu 11.10 and continue to have issues. I installed xubuntu-desktop in case the issues had to do with the more sophisticated GNOME graphics. The problem is less severe, but it's still there: after a few minutes of using XFCE, the screen gets garbled. I assume this has something to do with the graphics card, but I don't know how to go about troubleshooting something like this. Where should I start? Update: here is the description of the VGA card from lspci -vvv:
    01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series] (prog-if 00 [VGA controller]) Subsystem: Acer Incorporated [ALI] Device 028c Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- [disabled] Capabilities: [50] Power Management version 2 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+ Address: 0000000000000000 Data: 0000 Kernel driver in use: radeon Kernel modules: radeon
    Update: setting GRUB_CMDLINE_LINUX="nomodeset" in /etc/default/grub seems to have fixed it in both Ubuntu and xubuntu-desktop. I will test it for a day or so to see if the problems recur and then post more detail with some links to an explanation.
    Update 2: is it possible to use this fix for an Nvidia card (GTX 260) where the graphics became defective after an 11.10 upgrade/install? For the first few restarts the graphics were OK, then after a few restarts they suddenly became defective and stayed that way. I had to return to 11.04 because of this problem and am waiting for 12.04, so I hope this fix works.
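
    For anyone else hitting this, a sketch of how the nomodeset change described in the update can be applied (assuming the stock /etc/default/grub layout, where GRUB_CMDLINE_LINUX is initially empty):
      sudo cp /etc/default/grub /etc/default/grub.bak
      sudo sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="nomodeset"/' /etc/default/grub
      sudo update-grub
      sudo reboot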

    Read the article

  • Is it possible to upgrade PHP to 5.3 on a Centos Kloxo installation?

    - by Malachi
    I have a VPS running CentOS with Kloxo, and I was wondering how I would upgrade PHP to 5.3 - it's currently running 5.2.6. When I try to do a yum update I get the following errors:
    Resolving Dependencies --> Running transaction check --> Processing Dependency: libpq.so.4 for package: lxphp ---> Package postgresql-libs.i386 0:8.3.7-umask.7 set to be updated --> Finished Dependency Resolution lxphp-5.2.1-400.i386 from installed has depsolving problems --> Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) Error: Missing Dependency: libpq.so.4 is needed by package lxphp-5.2.1-400.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package.
    Any help would be greatly appreciated.
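
    For what it's worth, a sketch of the diagnostics that yum itself suggests in that output (package-cleanup lives in yum-utils); note that Kloxo ships PHP as its own lxphp package, so the stock php packages are not what actually serves the sites:
      sudo yum install yum-utils
      sudo package-cleanup --problems
      sudo package-cleanup --dupes
      sudo rpm -Va --nofiles --nodigest
      # lxphp needs libpq.so.4, which the newer postgresql-libs apparently drops; one hedged
      # workaround is to hold that package back while updating everything else
      sudo yum update --exclude=postgresql-libs
      # or, as the message itself suggests:
      sudo yum update --skip-broken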

    Read the article

  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: we have a domain, mydomain.com. Everything is on our own server, except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g. bugs@mydomain.com |/path/to/issuetracker.script. I'm struggling with a setup that allows the following, both locally and from users' email clients:
    - guser1 - has a Gmail account and a local account
    - guser2 - only has a Gmail account
    - bugs - has a pipe alias in /etc/aliases for the issue tracker
    Scenarios:
    - mail to guser1@mydomain.com from the local host (crons and such) needs to go to the Gmail account
    - mail to guser2@mydomain.com from the local host likewise needs to go to the Gmail account
    - mail to bugs@mydomain.com needs to be piped to the local issue tracker script
    So, the first stab was creating a transport map. In this scenario, our server would be set as the MX and guser*-destined emails are sent to Gmail. Put the Gmail users in a map like so:
    guser1@mydomain.com smtp:gmailsmtp:25
    guser2@mydomain.com smtp:gmailsmtp:25
    Problems:
    - Ignores addresses with an extension, such as guser1+something@mydomain.com
    - Only works if append_at_myorigin = no (if set to yes, Gmail refuses to connect with: E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused))
    - Since append_at_myorigin is set to no, all received emails have (unknown sender)
    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain. This too requires setting the local server as the MX:
    root: root@localhost
    # transport
    mydomain.com smtp:gmailsmtp:25
    Problems:
    - If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like mail -s "testing" root < text.txt, Postfix ignores the /etc/aliases entry, maps it to root@mydomain.com and attempts to send it via the Gmail transport mapping.
    Third stab: create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to the local server and leave the MX for mydomain.com pointing at the Gmail server.
    Problems:
    - Does not solve the issue with local accounts. So when the bug tracker responds to an email from guser1@mydomain.com, it uses a local transport and the user never receives the email.
    % postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    append_at_myorigin = no
    append_dot_mydomain = no
    biff = no
    config_directory = /etc/postfix
    inet_interfaces = all
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    mydestination = $myhostname, localhost.$myhostname, localhost
    myhostname = mydomain.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    myorigin = /etc/mailname
    readme_directory = no
    recipient_delimiter = +
    relayhost =
    smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
    smtp_tls_enforce_peername = no
    smtp_tls_key_file = /etc/ssl/certs/kspace.pem
    smtp_tls_note_starttls_offer = yes
    smtp_tls_scert_verifydepth = 5
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
    smtpd_tls_ask_ccert = yes
    smtpd_tls_req_ccert = no
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
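
    For completeness, one hedged sketch of a direction that might reconcile these scenarios (untested; hostname is hypothetical): stop listing mydomain.com as a local destination so that locally generated mail to guser1/guser2 simply follows the Gmail MX, and give the pipe aliases their own local-only subdomain, as in the third stab:
      # main.cf changes (sketch): the box no longer claims mydomain.com as local
      sudo postconf -e 'myhostname = host1.mydomain.com'
      sudo postconf -e 'mydestination = $myhostname, localhost.$myhostname, localhost, bugs.mydomain.com'
      # /etc/aliases keeps the pipe alias, now reachable as bugs@bugs.mydomain.com:
      #   bugs: "|/path/to/issuetracker.script"
      sudo newaliases && sudo postfix reload
      # DNS: publish an MX for bugs.mydomain.com pointing at this server; keep Gmail as the MX for
      # mydomain.com, and forward bugs@mydomain.com to bugs@bugs.mydomain.com on the Gmail side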

    Read the article

  • How do I get wireless and video working on a Sony Vaio vpc-z21x9r

    - by Alex
    OS: Ubuntu 11.10. Kernel: 2.6.37.6. Laptop: Sony Vaio VPC-Z21X9R. The problem: I can't change the screen backlight level, and I can't enable wi-fi. With a 3.0 kernel everything is OK, but the battery discharges too quickly. Results from lspci:
    02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 5f)
    VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    There is no wlan0 in ifconfig -a either. I've found that the kernel should already have a driver for this graphics controller, but when I boot in recovery mode it reports a failure while trying to load some video drivers. To change the backlight level I've already tried xbacklight -set XX and smartdimmer -s XX (which gives "init_nvclock() failed!"), and also sudo setpci -s 00:02.0 F4.B=XX, with no results. As for wi-fi, I've found a driver but I don't understand how to install it. How can I fix these problems? Thank you for your help and time! I also tried to fix the backlight with GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset acpi_backlight=vendor", still with no results. Is there any sense in recompiling the kernel (2.6.37.6)? Could that improve the situation?
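
    A wi-fi diagnostic sketch, with the caveat that support for the Centrino Wireless-N + WiMAX 6150 only appeared in fairly recent kernels, which may itself explain why it works on 3.0 but not on 2.6.37 (module and package names are the usual ones, not verified against this exact setup):
      sudo apt-get install linux-firmware            # the iwlwifi firmware blobs live here on Ubuntu
      dmesg | grep -i -e iwl -e firmware | tail -n 30
      rfkill list                                    # check for a soft/hard block from the wireless switch
      lsmod | grep iwl                               # is the iwlwifi/iwlagn module loaded at all?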

    Read the article

  • Screen tearing with GeForce 550 Ti

    - by Konair0s
    Recently I switched to a new monitor and graphics card: a Dell U2312HM and a GeForce GTX 550 Ti. I have problems with screen tearing (like in this picture from Wikipedia). Usually it is somewhere in the upper part of the screen. It mainly happens in videos (tearing is heavier in Flash videos). Games are fine, except for in-game videos (sometimes even videos rendered on the game engine), but gameplay itself is clear, even in very fast action. The connection is DVI. The problem occurs both in Linux (Debian GNU/Linux, openSUSE 12.1, Linux Mint 13) and in Windows (Windows XP, Windows 7), with various driver versions, at 1920x1080, 60 Hz. How can I resolve this?
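
    For the Linux side, a sketch of the vblank-sync settings I would check first (attribute names as exposed by nvidia-settings on the proprietary driver; availability varies by driver version):
      nvidia-settings --assign SyncToVBlank=1                 # sync OpenGL buffer swaps to vblank
      nvidia-settings --assign XVideoTextureSyncToVBlank=1    # sync XVideo playback to vblank
      # Flash also has its own hardware-acceleration toggle in its right-click settings, which matters here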

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle OpenWorld presentations (from the comfort of my Oracle office) and happened upon some of Larry Ellison's comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plants. Then, as the market and infrastructure for electricity matured, power consumers moved from running their own personal power plant to purchasing power from another entity that was focused on power production as its primary product. In the end this was a cheaper and more reliable solution. Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry's early-electricity-adopter analogy) is that, as a mode of application deployment, it provides me and my customers with a consistent environment in which the applications I am providing will run. This cuts way down on the environmental surprises that consistently lead to the hated "well, it works here" situation with the support desk. And just to be clear, I think I hate this situation more than my clients do, who I think are happy that at least it is working somewhere. I hate it because when a problem happens (and let's face it, customers are not wasting their time calling in easy problems) we are seriously disabled when we cannot reproduce an issue that is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all too frequent occurrence. I selfishly look forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and it may make "well, it works here" a long-forgotten phrase for future software developers. It may even impact the stress-squeeze-toy industry. Well, maybe at least for my group.

    Read the article
