Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • QoS, Squid, Virgin superhub

    - by swiss196
    I'm a student and have just moved into a house of 6. We've set up 100Mb broadband with Virgin and get quite good speeds on it, so we're really pleased. However, as there are 6 of us who all like to stream/game/download etc., I was looking at setting up QoS to ensure speeds were fair/equal. I looked into it and found out that the Virgin Superhub can't do it out of the box, so my next option is to set up a Squid proxy on my machine (always on) to act as a transparent proxy and provide QoS. However, I'm not really sure how to do this and had a look on the Internet but couldn't see any guides as such! I was wondering if someone could give me some more information and perhaps provide a summary of the steps involved - and let me know if this is possible with the SuperHub. Thanks, Dave
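
    Squid on its own shapes only the HTTP traffic that passes through it, via its delay_pools feature; fair sharing of all traffic (gaming, downloads, streaming) is normally done with Linux traffic control (tc) on the gateway box instead. A minimal, hedged sketch of the Squid side - the subnet and byte/second figures below are placeholders, not tuned values:

      # squid.conf (fragment) - illustrative values only
      acl lan src 192.168.0.0/24          # assumed house LAN range
      delay_pools 1
      delay_class 1 2                     # class 2: one aggregate bucket plus one bucket per client IP
      delay_parameters 1 -1/-1 1250000/2500000   # no aggregate cap; ~1.25 MB/s refill, 2.5 MB burst bucket per host
      delay_access 1 allow lan
      http_port 3128 intercept            # interception ("transparent") mode; older Squid spells this "transparent"

    Redirecting port 80 to the proxy, and true per-host fairness for non-HTTP traffic, would still need iptables REDIRECT rules plus a tc queueing discipline (e.g. sfq or htb) on the gateway, which is more than Squid alone can do.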

    Read the article

  • consequences of changing uid/gid on snow leopard

    - by Peter Carrero
    ok, so I introduced a Mac laptop to my home network of Kubuntu hosts and Fedora servers. Currently I don't have NIS or LDAP set up (I have only 2 users) and I just manually set up the UID/GID on the hosts. I would like to run the following commands on my MacBook:
      dscl . -change /Users/me UniqueID 501 1000
      dscl . -change /Users/me PrimaryGroupID 20 503
      chown -R 1000:503 /Users/me
      dscl . append /Groups/staff GroupMembership me
    Before I go on to hose my new Mac, I would like to know if this is the right thing to do and, if so, what adverse consequences I may face. Thanks.
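
    As a sanity check (a hedged sketch, assuming the short name really is "me"), it is worth confirming the IDs before and after the change, and then hunting for anything the recursive chown missed:

      # What Directory Services currently thinks
      dscl . -read /Users/me UniqueID PrimaryGroupID
      id me
      # After the change: anything still owned by the old UID 501 or group 20?
      sudo find /Users/me \( -uid 501 -o -gid 20 \) -ls
      # Numeric listing makes leftover ownership easy to spot
      ls -ln /Users/me

    The main adverse consequences to watch for are files outside /Users/me (in /Library, /private/var and so on) still carrying UID 501, and the group change itself: GID 20 is the standard "staff" group on OS X, so moving the primary group to 503 is exactly why the extra GroupMembership append is needed.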

    Read the article

  • MySQL cluster: Error after 1 data node is shut down and started again

    - by nitins
    We have configured MySQL Cluster (version 7.1) with 2 SQL/data nodes. We are using a tablespace instead of in-memory clustering. The setup was working fine. So to test the setup I shut down one data node, updated a table, and started the stopped node again. It's giving this error and not starting. Any ideas? Forced node shutdown completed. Occured during startphase 5. Caused by error 2306: 'Pointer too large(Internal error, programming error or missing error message, please report a bug). Temporary error, restart node'.
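
    Error 2306 is flagged as a temporary error, and with a two-node cluster the usual recovery is to let the failed data node throw away its local copy and rebuild from the surviving node. A hedged sketch (the management host is a placeholder; check how --initial treats Disk Data files in your exact 7.1 release before relying on it):

      # On the management node: confirm which data node is down
      ndb_mgm -e show
      # On the failed data node: initial start, discarding local data and
      # re-synchronising from the node that stayed up
      ndbd --initial --ndb-connectstring=mgmt-host
      # Watch it move through the start phases until it reports "started"
      ndb_mgm -e "all status"

    If it still fails in start phase 5 after an initial start, it is probably the bug the message is asking you to report.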

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution, should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need there to be file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like Peerlock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
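
    A quick back-of-the-envelope number shows why fully synchronous replication at that distance is hard no matter what a vendor claims. A sketch, assuming roughly 3,200 km of fibre and light travelling at about 200,000 km/s in glass:

      # Minimum latency imposed by physics alone, before any SAN or router delay
      awk 'BEGIN {
          km = 3200; c_fibre = 200000            # ~2000 miles; km/s in fibre
          one_way = km / c_fibre * 1000          # milliseconds
          printf "one-way: %.0f ms, round trip: %.0f ms per acknowledged write\n", one_way, 2 * one_way
      }'

    So every synchronous write pays on the order of 30 ms or more before the storage and WAN equipment add their share, which is why products like WAFS that advertise "instant" mirroring are, in all likelihood, doing asynchronous mirroring with byte-level deltas plus a distributed locking scheme rather than true synchronous replication.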

    Read the article

  • SVN Checkout URL - fresh install

    - by Webnet
    I just set up SVN on a server running Ubuntu Server as a fresh install. I've got it up and running but am having difficulty determining how to connect to it. I'm trying to do an import using the local IP address: http://IP/RepositoryName, but it's saying it can't resolve the IP. I'm wondering if there's something on the server I need to set up. I have not modified dav_svn.conf because there is another server here that is running SVN (I'm migrating it to a new server) and its dav_svn.conf is not modified. The current working SVN has a subdomain associated with the IP location of the server but doesn't do anything special with the ports as far as I can tell. I'm getting this error via RapidSVN when I try to import... Error: Error while performing action: OPTIONS of 'http://IP/RepositoryName': could not connect to server (http://IP) Any help would be appreciated.
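
    "Could not connect to server" from RapidSVN usually means Apache is not answering on that address at all, or mod_dav_svn has no Location mapped to the URL being used. A minimal dav_svn setup to compare against - a hedged sketch, assuming repositories live under /var/svn (adjust paths and the URL accordingly):

      sudo a2enmod dav dav_svn              # make sure both modules are enabled
      # /etc/apache2/mods-available/dav_svn.conf (fragment):
      #   <Location /svn>
      #     DAV svn
      #     SVNParentPath /var/svn
      #   </Location>
      sudo service apache2 restart
      # Quick checks on the server itself
      sudo netstat -tlnp | grep ':80 '               # is Apache actually listening?
      svn ls http://127.0.0.1/svn/RepositoryName     # does the repository answer locally?

    With that layout the import/checkout URL becomes http://IP/svn/RepositoryName rather than http://IP/RepositoryName; and if the error really is "can't resolve", double-check that the client is being given a bare IP and not a hostname it cannot look up.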

    Read the article

  • iptables to play nice with tor and ntpd

    - by directedition
    I'm setting up a server to operate as a Tor relay and nothing else. I set up iptables to only allow traffic on port 9001 and it worked fine, but there was an issue: the clock needs to be properly set and maintained for the relay to work properly, so I needed ntpd set up and running, but for some reason I can't get iptables to work as I want. I'm trying to have it allow only Tor and ntpd to talk over the network, but when I set it up to allow port 123 using UDP, suddenly it ignores my -A OUTPUT ! -s 127.0.0.1 -j DROP and allows everything through. How should I go about this? Please excuse my ignorance, I'm brand new to iptables.
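
    Rule order is the usual culprit here: iptables evaluates rules top-down, so an ACCEPT appended after the catch-all DROP is never reached, and one inserted in the wrong place can let far more through than intended. A hedged sketch of a self-contained OUTPUT chain for just the relay and NTP (port 9001 taken from the question; note a relay also dials other relays on whatever ORPort they advertise, so a pure dport 9001 policy is stricter than Tor really wants):

      iptables -F OUTPUT
      iptables -P OUTPUT DROP                                    # default deny
      iptables -A OUTPUT -o lo -j ACCEPT                         # loopback (control port etc.)
      iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A OUTPUT -p tcp --dport 9001 -j ACCEPT           # Tor ORPort
      iptables -A OUTPUT -p udp --dport 123 -j ACCEPT            # ntpd to upstream servers
      iptables -A OUTPUT -p udp --dport 53 -j ACCEPT             # DNS, if the box resolves names
      iptables -L OUTPUT -n -v --line-numbers                    # verify the order actually applied

    The matching INPUT side (accept new connections to tcp/9001 plus ESTABLISHED,RELATED, default DROP) is still needed as well.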

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale being that the main purpose of that file is to list all the other files in the project, and that’s something that the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): A file that lists files is really redundant when the OS already does this Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file however no longer will there be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is if its in the directory, its part of the project. Simple. Now I’m not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below the project has Miscellaneous and Connection Managers folders but no such folders exist on the file system: This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work and quite frankly it makes me feel like a second class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would, that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT the .sqlproj actually does a whole lot more than simply list files because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…) but I see no reason why the convention over configuration approach can’t help us there either. Want to know which is the Post-deployment script? Well, its the one called Post-DeploymentScript.sql! Simple! So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet
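
    For what it is worth, an exclusion-style project.json along those lines might look something like the sketch below - purely hypothetical, not the actual ASP.NET schema:

      {
        "dependencies": { },
        "exclude": [ "artifacts/**", "**/*.generated.cs" ]
      }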

    Read the article

  • Shrink Partition on Production Server

    - by Campo
    So our production server was set up with only one large partition. I have set up a standby server and properly partitioned it. Now the boss wants the production environment's partition shrunk. It is an HP DL380 G5, and we have 4 hot-swap drives in a RAID 5. How best should I go about doing this? It seems like a bad idea to me. Should I use Windows or the HP tools to do the partitioning? What should I be aware of in a production environment? The idea is to put the site (Inetpub) on a separate partition instead of the C: drive. How much downtime should I expect? Is this a terrible idea? Anything else I have missed?
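
    If the box is running Windows Server 2008 or later, the shrink itself can be done online with diskpart (Server 2003 cannot shrink a volume without third-party tools, which changes the plan entirely). A hedged sketch - the volume number and size are placeholders:

      diskpart
        rem confirm the volume number of the large C: volume first
        list volume
        select volume 1
        shrink querymax
        rem carve off roughly 100 GB, then create the Inetpub partition in Disk Management
        shrink desired=102400

    The RAID 5 array underneath is untouched; only the volume layout changes. Take a verified backup first, expect the shrink to stall if unmovable files (pagefile, shadow copies) sit at the end of the volume, and plan a short maintenance window even though the operation itself is usually quick.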

    Read the article

  • How do I properly set up and secure a production LAMP Server?

    - by Niklas
    It's very hard to find comprehensive information on this subject. Either I found short tutorials on how you perform the installation, as simple as "apt-get install apache2", or outdated tutorials. So I was hoping I could get some professional information from my fellow members of the Ubuntu community :D I have performed a normal Ubuntu Server 11.04 installation with LAMP, SAMBA and SSH installed through the system installer. But I'm having some trouble setting up virtual hosts and making the system secure enough to expose the server to the web. I've somewhat followed this tutorial thus far. I have 3 sites in /etc/apache2/sites-available which all look like this except for different site names:
      <VirtualHost example.com>
          ServerAdmin webmaster@localhost
          ServerAlias www.edunder.se
          DocumentRoot /var/www/sites/example
          CustomLog /var/log/apache2/www.example.com-access.log combined
      </VirtualHost>
    And I have enabled them with the command a2ensite, so I have symbolic links in /etc/apache2/sites-enabled. My /etc/hosts file has these lines:
      127.0.0.1 localhost
      127.0.1.1 Ubuntu.lan Ubuntu
      127.0.0.1 localhost.localdomain localhost example.com www.example.com
      127.0.0.1 localhost.localdomain localhost example2.com www.example2.com
      127.0.0.1 localhost.localdomain localhost example3.com www.example3.com
    And I can only access one of them from the browser (I have lynx installed on the server for testing purposes), so I guess I haven't set them up properly :) How should I proceed to get a secure and proper setup? I also use MySQL, and I think that this tutorial will be enough to set up SSH securely. Please help me understand Apache configuration better since I'm new to setting up my own server (I've only run XAMPP earlier), and please advise regarding how I should set up a firewall as well :D
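
    The usual reason only one site answers on Apache 2.2 is that name-based virtual hosting is not switched on and the <VirtualHost> line carries a hostname instead of *:80 with a ServerName. A corrected sketch, with example.com standing in for the real domains:

      # /etc/apache2/ports.conf (or conf.d) - enable name-based vhosts once
      NameVirtualHost *:80

      # /etc/apache2/sites-available/example
      <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/sites/example
          CustomLog /var/log/apache2/www.example.com-access.log combined
      </VirtualHost>

      # /etc/hosts - one line per test domain is enough; leave the localhost entries alone
      127.0.0.1   example.com  www.example.com
      127.0.0.1   example2.com www.example2.com
      127.0.0.1   example3.com www.example3.com

    After editing, sudo a2ensite example && sudo apache2ctl configtest && sudo service apache2 reload. For exposing the box to the web, ufw with only ports 22, 80 and 443 open is a reasonable starting firewall.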

    Read the article

  • Windows 7 Internet Sharing - How to have simultaneous Internet Access to my client

    - by Marl
    The condition: I'm running Windows 7 and using a USB broadband modem on my computer; my computer is then connected to the router, a TP-Link TL-WR340G (in this sense my computer is the Internet source, since my router has no USB port for this type of broadband). I set the broadband connection to have Internet sharing. I have 3-4 clients connected through the router. The problem is that whenever a client is using the Internet, the other clients, including me, don't have an Internet connection; additionally, if I have Internet access, the other clients don't. In my setup on Windows XP (bridging the broadband and the router network) it works perfectly fine; everyone has simultaneous Internet. To clarify, how can I have all clients, including me, connected to the Internet simultaneously on my Windows 7 OS? Additionally, the option to create a "network bridge" is missing (from this link, the "Bridge Connection" is missing) - how can I fix that?

    Read the article

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review version of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about; knowing that "best practices" is a real misnomer, I wanted to see what they were. I really like the fact that he starts the book by saying they are not really best practices but rather recommended practices. As a Team Foundation Server user I found that chapter 2 was more for the open source crowd and I mostly skimmed it. The portion on branching was well documented; although I'm not a fan of the testing branch myself, the rest was right on. The section on the merge remote changes (bring the outside to you) paradigm is really important and was touched on. Chapter 3 has good solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data-based architecture. DTOs and ORMs are touched on briefly, as is NoSQL. Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions. There is a nice section on how to move from VSSD to WiX; this is going to be important in the coming years due to the fact that VS 2012 doesn't support VSSD. In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and Continuous Testing. Peter covers all those concepts really nicely, albeit succinctly. For a book on recommended practices I find this is really good. I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". The tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on Continuous Integration, a very important concept in today's ALM landscape. Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on and, again, generalized approaches to those modern problems are given. Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging and cloud (the flavor of the month of distributed apps, although I think this will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps. He touches on logging and health monitoring. Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, and goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, and a section on authentication and authorization. ASP.NET web services are also touched on in that chapter. All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of current technology flavors to explain the concepts. Cheers, ET

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters detailed article about different game loop solutions but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second on his article. First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is and I'm going to try and explain (feel free to correct me) the relevant functions for my problem of this OpenGL toolkit. Skip this section if you know what GLUT is and how to play with it. GLUT Toolkit: GLUT is an OpenGL toolkit and helps with common tasks in OpenGL. The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once. The glutPostRedisplay() function requests GLUT to render a new frame so we need call this every time we change something in the scene. The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant) but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high resolution timers, but let's keep with this one for now. I think this is enough information on how GLUT renders frames so people that didn't know about it could also pitch in this question to try and help if they fell like it. Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this: #define TICKS_PER_SECOND 30 #define MOVEMENT_SPEED 2.0f const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND; int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void processAnimationTimer(int value) { // setups the timer to be called again glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Requests to render a new frame (this will call my renderScene() once) glutPostRedisplay(); } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) 
// Setup the timer to be called one first time glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Read the current time since glutInit was called currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } This implementation doesn't fell right. It works in the sense that helps the game speed to be constant dependent on the FPS. So that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the time callback is called, that means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right, you shouldn't limit your powerful hardware, it's wrong. It's my understanding though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how can I fix this and to be completely honest, I have no idea what is the game loop in GLUT, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as replacement of glutTimerFunc(), only rendering when necessary (instead of all the time as usual) but when I tested this with an empty callback (like void gameLoop() {}) and it was basically doing nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game and it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to perform all movements/animations based on that, as I'm limiting my game to a constant FPS, not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one on his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed out about this site. The following is a different approach I tried after creating the question in SO, so I'm posting it here too. Another Approach: I've been experimenting and here's what I was able to achieve now. Instead of calculating the elapsed time on a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (ie: camera moving, some object animation, etc...) which will make a call to renderScene(). I can use the elapsed time in this function to move my camera for instance. My code has now turned into this: int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void renderScene(void) { (...) 
// Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Setup the camera position and looking point SceneCamera.LookAt(); // All drawing code goes inside this function drawCompleteScene(); glutSwapBuffers(); /* Redraw the frame ONLY if the user is moving the camera (similar code will be needed to redraw the frame for other events) */ if(!IsTupleEmpty(cameraDirection)) { glutPostRedisplay(); } } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } Conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low, nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames and this implementation is probably affected by the problem described on that article. What's your thoughts on that? Please note that I'm just doing this for fun, I have no intentions of creating some game to distribute or something like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?

    Read the article

  • IIS in VirtualBox serving files from shared Ubuntu folder

    - by queen3
    Here's the problem: I want to use Ubuntu, but I need to develop ASP.NET (MVC) sites. So I set up VirtualBox with Win2003 and IIS6, but I would prefer my working files to be located in my Ubuntu home folder. So I set up a shared folder in VirtualBox and tried to make an IIS6 virtual directory work from there. The problem is, IIS6 can't do this. Whatever I try (mapped drive, network URI path) I get different IIS errors: can't access the folder (for the mapped drive), can't monitor file system changes (for the \vboxsvr share path), and so on. Is there a way to configure IIS6 in the virtual machine so that a virtual application folder lives on the host machine (Ubuntu) - be it a shared folder, mapped drive, SMB share, or whatever?

    Read the article

  • cloud/grid computing

    - by tom smith
    Hi guys. I'm apologizing in advance to the guys who will tell me this isn't a tech/server/IT issue! But I've been beating my head against this for a couple of days now. I'm trying to figure out who to talk to, or which company I can approach, to see if there are grid/cloud computing companies who have programs set up to deal with colleges. I'm dealing with a compsci course, and we're looking at a few projects that would require a great deal of computing/computational resources. But in calling different companies (HP/Rackspace/etc.) I'm either not getting through to the right departments, or to the right people, or the companies just aren't set up for this. There are plenty of companies who have discounts for desktop software/hardware, but who in the business deals with discounts/offerings for cloud/grid computing solutions? Any thoughts/pointers would be greatly appreciated. Thanks -tom

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (Coldfusion) application from single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff and additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs with business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is a newer more modern ERP system is being implemented business wide that will be a complete replacement. This newer ERP system offers several interfaces for third party applications like mine, from direct SQL access to either a dotNet web-service or a SOAP-like web-service. I have found several suitable frameworks I would be happy to use (Coldspring, FW/1) but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens, with this background, my question has the following three parts: Firstly I have concerns with moving from the relative safety of a SSIS job that protects me from downtime and speed of the ERP system to directly connecting with one of the web services which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier? It is my understanding data access objects (the component that connects directly with the web services and convert them into the data types I can then work with in my Domain Objects) should be singletons (and will act as an Adapter/Facade), am I correct? As part of the data access object I have to setup a connection by username/password (I could set up multiple users and/or connect multiple times with this) which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application, do I setup a new "connection" for every user of my application and keep the token in their session scope (might quickly hit licensing limits), do I set the "connection" up per page request, or is there a design pattern I am missing that can manage multiple "connections" where a requests/access uses the first free "connection"? It is worth noting if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web-service I use might need manually close the "connection/session"

    Read the article

  • Openfiler RAID 10 option not found

    - by chrisling106
    Hi, I'm building a NAS using Openfiler 2.3 (from the 32-bit ISO). First of all I want to experiment with it in a VM before going out and buying the hard drives needed. I created 5 virtual drives in VMware; sda is 2GB and the rest are 1GB each (sdb to sde). I left sda blank and want to set up a RAID 10 disk using sdb, sdc, sdd and sde. The 4 RAID partitions are set up successfully, but when I try to create a RAID device the only options for RAID level are 1, 0, 5 and 6. RAID 10 is not there! Can someone let me know what I have missed, please? TIA.
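
    If the web interface simply never offers level 10, one workaround is to assemble the array from the shell (Openfiler is a Linux distribution underneath) and let the GUI manage the resulting device afterwards - a hedged sketch, assuming the four RAID partitions are sdb1 through sde1 and that the Openfiler build includes RAID 10 support in mdadm and the kernel:

      # Native RAID 10 across the four members
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
      cat /proc/mdstat                      # watch the initial sync
      # Nested alternative if --level=10 is unavailable: two mirrors, striped together
      # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
      # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
      # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2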

    Read the article

  • Multiple audio sources on a single gameObject in unity

    - by angryInsomniac
    So, I have an audio system set up wherein I have loaded all my audio clips centrally and play them on demand by passing the requesting AudioSource into the sound manager. However, there is a complication: if I want to overlay multiple looping sounds, I need to have multiple audio sources on an object, which is fine, so I created two in my script, instantiated them and played my clips on them, and then the world went crazy. For some reason, when I create two AudioSources on an object only the latest one is ever used, even if I explicitly keep the objects separated; playing a clip on one or the other plays the clip on the last one that was created. Furthermore, either this last one is not created in the right place or it somehow messes with the rolloff rules, because I can hear it all across my level. Having just one source works fine, but putting a second one on the object causes things to go batshit insane. Does anyone know the reason / solution for this? Some pseudocode:
      guardSoundsSource = (AudioSource)gameObject.AddComponent("AudioSource");
      guardSoundsSource.name = "Guard_Sounds_source"; // Setup this source
      guardThrusterSource = (AudioSource)gameObject.AddComponent("AudioSource");
      guardThrusterSource.name = "Guard_Thruster_Source"; // setup this source
      // play using custom Sound manager
      soundMan.soundMgr.playOnSource(guardSoundsSource, "Guard_Idle_loop", true, GameManager.Manager.PlayerType);
      // this method prints out the name of the source the sound was to be played on, and it always shows
      // "Guard_Thruster_Source" even for "Guard_Idle_loop", even though I clearly told it to use "Guard_Sounds_source"

    Read the article

  • How does the Ubuntu cloud version enforce the "no root login" over ssh?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup where it denies root login. Attempting to connect to such a machine yields:
      maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh [email protected]
      The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
      RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
      Please login as the ubuntu user rather than root user.
      Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.
    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.
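
    On the Ubuntu UEC/EC2 images this is not sshd's PermitRootLogin; the message comes from a forced command placed in front of root's authorized key at boot (driven by cloud-init's disable_root setting), so that is where to look. A hedged sketch of what to check, run as the ubuntu user - the exact key line varies by image:

      sudo cat /root/.ssh/authorized_keys
      # roughly of this shape:
      #   command="echo 'Please login as the ubuntu user rather than root user.';echo;sleep 10" ssh-rsa AAAA...
      # Removing or editing the command="..." prefix changes the behaviour (and the message) on this
      # instance; for future instances, adjust how cloud-init writes the entry:
      grep -n disable_root /etc/cloud/cloud.cfg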

    Read the article

  • Powershell: Install-dotNET4 function

    - by marc dekeyser
    This function will download and install .NET 4.0. It uses the Get-Framework-Versions function to determine whether the installation is necessary or not. Internet connectivity will be required, as the script auto-downloads the setup file (and sleeps for 360 seconds... I had a function in there to monitor for install completion at first, but it turns out the setup file spawns so many child processes that the function just got confused and locked up -_-). Alternatively you could drop the installation file in the folder specified in the $folderPath variable; that will skip the download and use the file. This function easily adapts to other versions, e.g. I use it for PowerShell 3 installs as well!
      Function install-dotNet4 () {
          if(($InstalledDotNET -eq "4.0") -or ($InstalledDotNET -eq "4.0c")){
              write-host ".NET 4.0 Framework is already installed" -foregroundcolor Green
          } else{
              #set a var for the folder you are looking for
              $folderPath = 'C:\Temp'
              #Check if folder exists, if not, create it
              if (Test-Path $folderpath){
                  Write-Host "The folder $folderPath exists." -ForeGroundColor Green
              } else{
                  Write-Host "The folder $folderPath does not exist, creating..." -NoNewline -ForegroundColor Red
                  New-Item $folderpath -type directory | Out-Null
                  Write-Host " - done!" -ForegroundColor Green
              }
              # Check if file exists, if not, download it
              $file = $folderPath+"\dotNetFx40_Full_x86_x64.exe"
              if (Test-Path $file){
                  write-host "The file $file exists." -ForeGroundColor Green
              } else {
                  #Download Microsoft .Net 4.0 Framework
                  Write-Host "Downloading Microsoft .Net 4.0 Framework..." -nonewline -ForeGroundColor DarkYellow
                  $clnt = New-Object System.Net.WebClient
                  $url = "http://download.microsoft.com/download/9/5/A/95A9616B-7A37-4AF6-BC36-D6EA96C8DAAE/dotNetFx40_Full_x86_x64.exe"
                  $clnt.DownloadFile($url,$file)
                  Write-Host " - done!" -ForegroundColor Green
              }
              #Install Microsoft .Net Framework
              Write-Host "Installing Microsoft .Net Framework..." -nonewline -ForegroundColor DarkYellow
              $dotNET4 = $folderPath+"\dotNetFx40_Full_x86_x64.exe /quiet /norestart"
              Invoke-Expression $dotNET4
              write-host " - done!" -ForegroundColor Green
              start-sleep -seconds 360
          }
      }

    Read the article

  • Building a new PC, Installing XP, blue screen of death

    - by Tim
    I got a Gigabyte barebones kit and am installing Windows XP (SP1). The initial setup works, then it restarts and goes into the second phase of the setup. Then, when installing components (I think that's what it says), it gets halfway done and comes up with a blue screen saying IRQL_NOT_LESS_OR_EQUAL. BUT! I got past that by installing Windows XP Media Center Edition. Now I am trying to install the drivers for my Asus ATI Radeon 5770 graphics card and I get another blue screen of death that doesn't give much info, something about address x0000005. Do you think there is something wrong with my system, or do you think that if I got Windows 7 it would take care of things? Sorry for probably not giving enough info. Here is what I have:
      Motherboard - Gigabyte S-series GA-H55M-S2(v)
      PSU - Ultra 500 watt ATX
      HDD - SATA Seagate Barracuda 7200
      CPU - Intel i3
      Memory - 4GB Crucial
      Graphics Card - Asus ATI Radeon 5770 1GB DDR5

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load. From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues. The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them. The default work-manager, as its name tells, is the work-manager defined by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work-manager belongs to a thread pool; as the initial thread pool comes with only five threads, that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.
    1) Modify config.xml. Just add the following line(s) in your server definition:
      <server>
        <name>AdminServer</name>
        <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
        <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
        [...]
      </server>
    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:
      -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100
    Reboot WLS and see that the option has been taken into account.
    Disadvantage: So far this is fine, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to its being undersized, then important work that WLS might need to do could be starved. So limiting the self-tuning pool not only limits the default WorkManager; it also limits all the other internal WorkManagers which WLS uses. The best alternative is therefore to override the default WorkManager, that is, to create a WorkManager for the application and assign that WorkManager to the application instead of tuning the Default WorkManager.
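
    To illustrate that last recommendation: a per-application Work Manager lives in the application's own descriptors rather than in the global pool settings. A hedged sketch for a web application - the names and the thread count are placeholders:

      <!-- weblogic.xml: define a Work Manager with its own thread ceiling -->
      <work-manager>
        <name>MyAppWorkManager</name>
        <max-threads-constraint>
          <name>MyAppMaxThreads</name>
          <count>50</count>
        </max-threads-constraint>
      </work-manager>

      <!-- weblogic.xml: dispatch this web application's requests to it -->
      <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>

    This caps the application without constraining the internal WorkManagers that WLS itself depends on.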

    Read the article

  • Planning home network

    - by gakhov
    I'm planning to set up my home network from scratch and want to ask for professional opinions or tips. My home is connected to the Internet with a cable connection (100 Mb/s). The devices I would like to connect are a VoIP phone (RJ-45), a TV (WiFi/LAN), 3 laptops (WiFi), 2 smartphones (WiFi), an iPad (WiFi), a Kindle (WiFi), a network printer and, probably, a home media storage device (WiFi/LAN). As you can see, most of the load will be on WiFi connections (although, even if the TV supports WiFi, it's probably better to connect it by LAN?). So I need help choosing the best router (or combination of routers) to support stable connections for all these devices and minimize the total number of routers/adapters. I like how Cisco/Linksys devices have worked for me in the past, so preferably (but not obligatorily) I want to set up the network with their solutions. Any thoughts?

    Read the article

  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my logstash server is accepting, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs and send them to elasticsearch. My question: do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng --- write to redis and also read from redis --- output to elasticsearch? Diagram of my setup:
      [client]-------syslog-ng--- [log server] ---syslog-ng <----logstash-shipper --- redis <----logstash-server ---- elastic-search <--- kibana
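
    When everything sits on one box, the shipper/redis/indexer split is optional: redis is there mainly as a buffer between a remote shipper and the indexer, so a single logstash instance can tail the syslog-ng files and write straight to elasticsearch. A minimal sketch of such a config - the path and ES host are assumptions:

      # /etc/logstash/central.conf - one process: file in, elasticsearch out
      input {
        file {
          path => "/var/log/syslog-clients/*"
          type => "syslog"
        }
      }
      output {
        elasticsearch {
          host => "127.0.0.1"
        }
      }

    Keeping redis in the middle still makes sense if the indexer might fall behind or be restarted independently, but that is a buffering decision, not something that forces two configurations onto the same machine.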

    Read the article

  • Thomson TG585v7 router - promiscuous mode

    - by Nikita
    I have a TG585v7 as a router with several machines plugged into it. In the default setup, packets are only delivered to the specific machine they are addressed to, but I want to be able to monitor all network traffic on one of the machines, i.e. I need those packets to be picked up when my Ethernet card is in promiscuous mode. Is this possible? The guide here has this: "mcastpromisc Make the IP interface multicast promiscuous. OPTIONAL" - is this what I am looking for? Does it mean I need to manually add all my machines by their MAC addresses to be able to receive packets destined for them? Or am I out of luck and do I need to get a better router?

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with rpm version 4.8, I have in /usr/lib/macros:
      # Path to top of build area.
      %_topdir %{getenv:HOME}/rpmbuild
    On RHEL 5.x with rpm version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros:
      %_topdir /home/someuser/rpmbuild
    and this will work for that user; however, I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those. This in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
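
    rpm 4.4 does not know %{getenv:...}, but it does support %(...) shell expansion at macro-expansion time, so a single system-wide macro can still point every user at their own home directory. A hedged sketch, using /etc/rpm/macros for the site override and a profile script instead of /etc/skel:

      # /etc/rpm/macros - read for every user's rpmbuild; %(...) runs in their shell
      %_topdir %(echo $HOME)/rpmbuild

      # /etc/profile.d/rpmbuild.sh - create the build tree at login
      mkdir -p "$HOME"/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

    One caveat worth testing: %(echo $HOME) is evaluated wherever rpmbuild runs, so builds launched from cron or a daemon without HOME set would need it exported first.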

    Read the article
