Search Results

Search found 16903 results on 677 pages for 'single responsibility'.


  • Switching over an email address from a distribution group to a user account in Exchange 2003

    - by Sevdarkseed
    I'm currently in transition mode. We have a Distribution Group called Quotes and Orders that sends out emails to several users. I'm told that a better method would be to create a user and then give everyone access to that fictitious user's email account, so that everyone can see everything that goes out and is responded to in a single account. However, I'm not sure what the best method would be for creating the account and shutting down the distribution group. I'm thinking more along the lines of the steps considered best practice for removing the email address from the distribution group and attaching it to the user account. Any thoughts?

    Read the article

  • Transferring filesystem-structured music collection to ipod

    - by ansgri
    I have a rather large music collection organized like music/<artist>/<album>/<track>-<title>.<fmt>, mostly mp3. The tagging, however, is rather inconsistent; on the PC, or with decent older players (Cowon D2+), I don't care, because I use the filesystem view. In iTunes, though, this all gets messed up, because iTunes ignores file locations and looks only at tags. What's worse, it consistently splits compilations into single-track artist albums. So, is there a way to take the existing filesystem artist-album structure and bring it into a form compatible with iTunes/iPod? Again, I don't care about tags. An automated approach is most welcome, but at the very least please direct me to some document specifying all the little details of iTunes' metadata requirements for compilations.
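
    One possible automated approach, as a minimal sketch: rebuild the tags from the directory layout itself. This assumes the id3v2 command-line tool is installed and that every file actually follows the music/<artist>/<album>/<track>-<title>.mp3 pattern.

        #!/bin/bash
        # Hypothetical retagging sketch: derive ID3 tags from the folder structure.
        cd music || exit 1
        for f in */*/*.mp3; do
            artist=${f%%/*}              # first path component
            rest=${f#*/}
            album=${rest%%/*}            # second path component
            file=${rest#*/}
            track=${file%%-*}            # digits before the first dash
            title=${file#*-}
            title=${title%.mp3}
            id3v2 -a "$artist" -A "$album" -T "$track" -t "$title" "$f"
        done

    Note that marking an album as a compilation needs the nonstandard TCMP frame, which id3v2 does not write; that last step would need a different tool (e.g. a tagging library).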

    Read the article

  • What are the Windows G: through Z: drives used for?

    - by Tom Wijsman
    In Windows you have a C: drive. Everything labeled beyond that seems to be extra stuff: my DVD drive is D:, and if you put in a USB stick it becomes F:. Some people also have A: and B:. But then, what and where are the G: through Z: drives for? Is it possible to connect enough things to a computer to put them all in use, or even more than that? Would it cause a BSOD, or slow the system down somehow, or what would happen? What if I want to connect even more drives to the computer? Given hard-drive size limits, it's more efficient to buy several drives than a single drive with a lot of capacity. Is it possible to create drive letters like 0: through Z:, or AA: through ZZ:?

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20-50 unique meshes. We created separate scene nodes for each object, each referencing its mesh instance. But the results were very much unexpected. Menu screen (150 tris, just to show the full-speed rendering performance of the 2 test computers): a) NVidia Quadro FX 3800 with 1GB: 1600 FPS DirectX and 2600 FPS OpenGL; b) Mac Mini with GeForce 9400M 256MB: 260 FPS OpenGL. Now inside the game, in a test level (160 dynamic objects, around 10K tris): a) NVidia Quadro FX 3800 with 1GB: 45 FPS DirectX and 50 FPS OpenGL; b) Mac Mini with GeForce 9400M 256MB: 45 FPS OpenGL. Obviously we don't have the option of batch-rendering the meshes, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. For more context, we use one 2048px PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we concluded it's the draw calls we make that eat up the FPS. Is there a way we can optimize the rendering, or are we missing something?

    Read the article

  • emulate dual monitor using a second machine

    - by Hemal Pandya
    I have the monitors of my two machines side by side, and I use them both with a single keyboard/mouse using Synergy+ (now hosted at Google Code), which works great. But is it possible to use the monitor of my secondary machine as the secondary monitor of my primary? I am on XP, so from what I understand I cannot just RDC from the secondary to the primary; in any case, that would be a different session altogether, and I would prefer to be able to extend my desktop over the two monitors. Any solutions or suggestions? Thanks in advance.

    Read the article

  • Select Data From XML in MS SQL Server (T-SQL)

    - by Doug Lampe
    So you have used XML to give you some schema flexibility in your database, but now you need to get some data out. What do you do? The solution is relatively simple:

        DECLARE @iDoc INT          /* Stores a pointer to the XML document */
        DECLARE @XML VARCHAR(MAX)  /* Stores the content of the XML */

        SET @XML = (SELECT TOP 1 Xml_Column_Name
                    FROM My_Table
                    WHERE Primary_Key_Column = 'Some Value')

        EXEC sp_xml_preparedocument @iDoc OUTPUT, @XML

        SELECT *
        FROM OPENXML(@iDoc, '/some/valid/xpath', 2)
             WITH (output_column1_name VARCHAR(50) 'xml_node_name1',
                   output_column2_name VARCHAR(50) 'xml_node_name2')

        EXEC sp_xml_removedocument @iDoc

    In this example, the XML data would look something like this:

        <some>
          <valid>
            <xpath>
              <xml_node_name1>Value1</xml_node_name1>
              <xml_node_name2>Value2</xml_node_name2>
            </xpath>
          </valid>
        </some>

    The resulting query should give you this:

        output_column1_name    output_column2_name
        ------------------------------------------
        Value1                 Value2

    Note that in this example we are only looking at a single record at a time. You could use a cursor to iterate through multiple records and insert the XML data into a temporary table.

    Read the article

  • How to create a filesystem mountable by windows in linux?

    - by wcoenen
    I have attached an external USB disk to my Debian GNU/Linux system. The disk showed up as device /dev/sdc, and I prepared it like this: I created a single partition with fdisk /dev/sdc (plus a few more commands in the interactive session that follows), then formatted the partition with mkfs.msdos /dev/sdc1. If I then attach the USB disk to a Windows XP or Vista system, no new drive becomes available. The disk and its partition show up fine in the disk management tool under "computer management", but apparently the file system in the partition is not recognized. How do I create a FAT32 file system which can actually be used in Windows? Edit: I've given up on this and went with an NTFS file system created by Windows. In Debian Lenny this can be mounted read-write, but it requires you to install the "ntfs-3g" package and explicitly pass the -t ntfs-3g option to the mount command.
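
    For the original FAT32 goal, a minimal sketch of what usually works (assuming the disk really is /dev/sdc and its contents are disposable): mkfs.msdos without options can produce FAT16 on small partitions, and Windows also cares about the partition type byte.

        # Set the partition type to 0x0b (W95 FAT32) in fdisk's interactive
        # session: t (change type), then b, then w (write and exit).
        sudo fdisk /dev/sdc
        # Force a 32-bit FAT explicitly; the volume label is just an example.
        sudo mkfs.vfat -F 32 -n USBDISK /dev/sdc1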

    Read the article

  • Running System Center Configuration Manager on a Domain Controller

    - by Brent D
    We are a smallish educational network (about 70 clients) with a single server running Windows Server 2008 Enterprise, functioning as both domain controller and file server. The educational pricing for Microsoft Forefront Endpoint Protection 2010 is irresistible as a managed anti-malware solution, but it requires System Center Configuration Manager 2007. I know best practice is not to run System Center Configuration Manager on a domain controller, but it's the only server I have to work with. Will installing SCCM on a domain controller cause problems? What conflicts might I need to take into account when planning deployment?

    Read the article

  • What You Said: How You Set Up a Novice-Proof Computer

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tips and tricks for setting up a novice-proof computer; read on to see how your fellow readers ensure friends and relatives have a well-protected computer. If you only listen to a single bit of advice from your fellow readers, let that advice be the importance of separate, non-administrative user accounts. Grant writes: I have two boys, now 8 and 10, who have been using the computer since age 2. I set them up on Linux (Debian first, now Ubuntu) with a limited-rights account. They can only make a mess of their own area. Worst case, I empty their home directory and let them start over. I have to install software for them, but they can't break the machine without causing physical damage (hammers, water, etc.). My wife was on Windows, and I was on Debian, and before they had their own, they knew they could only use my computer, and only logged in as themselves. All accounts were password protected, so that was easy to enforce.
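
    Setting that tip up on Ubuntu takes only a couple of commands; a minimal sketch, with the account name purely an example:

        # Create a standard (non-administrator) account for the novice user.
        sudo adduser novice
        # Belt and braces: make sure it is not in the admin (sudo) group.
        sudo deluser novice sudo
        # Worst case, reset their world by recreating the home directory:
        # sudo rm -rf /home/novice && sudo mkhomedir_helper novice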

    Read the article

  • Is it possible to get xRandR to see two separate outputs with the nvidia driver?

    - by rumtscho
    I have two monitors, which I have set up with nvidia-settings in TwinView. The result: when I want to do something in xRandR, it doesn't behave as expected. It doesn't report one output per video-card head, but a single output mapped to the combined area of both monitors:

        rumtscho@bradbury:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 3840 x 1440, current 3840 x 1440, maximum 3840 x 1440
        default connected 3840x1440+0+0 0mm x 0mm
           3840x1440       50.0*

    Now I promised somebody to help test a driver. The developer is using an open-source driver for Intel video cards, and his driver assumes that there is more than one xRandR output, each mapped to a monitor. So I tried rewriting my xorg.conf to somehow get two outputs to show up, but failed. Googling showed that people faced with the xRandR/nvidia problem either stopped using xRandR and achieved what they needed with nvidia-settings, or changed their driver to nouveau. The first won't help in my situation, and I am not willing to give up the proprietary driver, because Compiz won't work without it. So does anybody know a way to get nvidia to actually pass information on outputs to xRandR?

    Read the article

  • How do I delete an Outlook calendar entry (after the meeting is over) without notifying the creator

    - by JabberwockyDecompiler
    I have several meetings during the day, and I like to be able to open my calendar and see at a glance what is left, so I delete the meetings that I have already completed. If I do this close enough to the meeting time, I am asked whether I want to notify the creator. If I do this after the meeting has started, Outlook automatically sends the creator a notice that I have declined the meeting. I am only deleting the one instance, so it is still in my calendar for the next time; however, this creates an email that others must read/delete. I need to be able to remove single occurrences of meetings without automatically sending a notice that I am deleting the entry. NOTE: I am using Outlook 2007; I did not see anything in the Advanced Email Options. NOTE 2: I have seen this happen with Lotus Notes as well (like anyone actually uses that). NOTE 3: No sent message is created; only the creator of the calendar event will see the message.

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working on Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare), and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately, it's impossible to add a new repo without SSHing to the server and adding it manually; i.e., git push http://servername/allgitrepos/repo2.git fails with an error about git update-server-info (which seems to be a general error message), even though allgitrepos is readable, writable and executable by everyone. So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP creds to make the git commit. So, how can I push new repos to the server, and how can I use the LDAP creds to make the git commit? Thanks
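
    Git itself will not create a repository on push; the bare repo has to exist server-side first. A minimal sketch of the usual setup (paths, names and the www-data user are examples, assuming git-http-backend behind Apache):

        # Server side: create the new bare repo where the web server can reach it.
        sudo -u www-data git init --bare /var/www/allgitrepos/repo2.git
        cd /var/www/allgitrepos/repo2.git
        # Allow pushes over Smart HTTP (authenticated users get this by default).
        git config http.receivepack true
        # The sample hook runs git update-server-info after each push, which
        # also keeps dumb-HTTP clients working.
        mv hooks/post-update.sample hooks/post-update

        # Client side: bake the LDAP identity into the commits themselves.
        git config user.name "jdoe"
        git config user.email "jdoe@example.com"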

    Read the article

  • How can I deal with a difficult developer that is holding back the project? [migrated]

    - by ILovePaperTowels
    Our entire project is being held up because of one piece which is being handled by a single developer. When we finally got the latest version of his code and started reviewing it, we found it horrendous. It's a relatively simple workflow, yet the code is so complex that it's very difficult to step through and review/debug. The developer responsible has a hard time accepting any kind of criticism and feels he is more knowledgeable than other members of the team. It's difficult to even talk to him about his development work, because it turns into an "I know what I'm talking about and you're just wrong!" type of conversation. A request has already been put in to replace this developer, but management is not doing anything, probably because devs are in short supply where we are, and this corporation has a lot of office drama. I'm just one of the developers, not the project manager, but I really want to see this project succeed. What can I do in this sort of situation to try and keep the project on track?

    Read the article

  • How to prevent Network Manager from auto creating network connection profiles with "available to everyone" by default

    - by airtonix
    We have several laptops at work which run Ubuntu 11.10 64-bit. Our Wi-Fi access point requires WPA2-EAP authentication (backed by an LDAP server), and staff use these laptops for presentations via the Guest account. By default, when you have a Wi-Fi card, Network Manager displays the available wireless access points, so the logical course of action for a Novice(tm) user is to single-left-click the easy option in the Network Manager drop-down list. At this point the staff member (who is logged in with the Guest account) expects to just connect and enter any authentication details if required. But because they are using the Guest account, they won't ever have admin permissions (nor do I want them to), so PolKit kicks in with a request for admin authorisation. I solved this part by modifying the PolKit permissions to allow all users to create system network connections. However, because these staff members are logging onto the Wi-Fi access point with LDAP credentials, and because Network Manager is now saving those credentials as a system connection, their password is available to the next guest session (system connection profiles are stored in /etc/NetworkManager/system-connections/*). It creates system connections by default because "Available to all users" is ticked by default when you quickly connect to a new Wi-Fi access point. I want Network Manager to not tick this by default. That way I can revert the changes I made to PolKit, and users' network connection profiles will be purged when they log out.
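
    If the default tick cannot be disabled, a workaround is to purge the saved profiles when the guest session ends. A minimal sketch (the SSID and hook location are examples; must run as root from a logout/session-cleanup hook):

        #!/bin/bash
        # Remove system connection profiles that the guest session saved for
        # our EAP access point, so cached LDAP passwords do not outlive it.
        for conn in /etc/NetworkManager/system-connections/*; do
            grep -q '^ssid=StaffWifi' "$conn" && rm -f "$conn"
        done
        # Make Network Manager forget the deleted profiles (11.10 uses upstart).
        restart network-manager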

    Read the article

  • Does using VLANs in your network infrastructure cause an appreciable decrease in performance?

    - by Peter Grace
    This is something I've never considered before, and I wanted the opinions of the experts. We use VLANs day in and day out for various network tasks. My modus operandi is that, in general, if something supports VLANs, that port is getting trunked, because it just makes a ton of sense if there's even the slightest chance you'll need to do more than one thing on that single link. As I ponder this, though, I'm wondering whether there's a performance penalty involved with this line of thinking. Is the impact negligible?
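
    One way to answer this empirically rather than in theory: measure the same path tagged and untagged. The 802.1Q tag adds only 4 bytes per frame, so any difference is usually lost in the noise. A minimal sketch with iperf (assumed installed; addresses are examples):

        # On a server reachable both ways:
        iperf -s
        # On the client, test the untagged path, then the same host via the
        # tagged VLAN sub-interface, and compare the reported throughput.
        iperf -c 192.0.2.10 -t 30
        iperf -c 192.0.2.110 -t 30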

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I'm writing a CAD system, so I have a 3D scene with many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how should I organise vertex storage for all my objects? I could create a vertex buffer for every object, but I think drawing/switching from one buffer to another would carry a performance penalty. Alternatively, I could create several big buffers, one per object type, but I don't understand how to update such buffers: updating a whole buffer (for example, the buffer for all walls) is too expensive. And what do I need to do if I want to delete an object from the middle of a buffer? I actually have a question similar to this one: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10. Most examples I've found work with very static models; they tend to create a single vertex buffer with their list of points and then just manipulate it with matrix transformations. I, on the other hand, will be updating the scene very often.

    Read the article

  • Dedicated Servers: Is one better than two for a LAMP pseudo-HA setup? [closed]

    - by bikedorkseattle
    Possible Duplicate: How to find web hosting that meets my requirements? I know there are zillions of pieces of commentary about hosting out there, but I haven't read much about this. Our current well-known host is having too many problems, the hardware we are on is subpar, and I'm ready to leave. A day of downtime can cost as much as our monthly hosting bill, and a month of bad performance is just killing us right now, both user- and Google-wise. I'm wondering about running two dedicated boxes for LAMP: one as the primary Nginx/Apache box (proxy pass), and the other as the MySQL box. Running a single box scares the bejesus out of me, because who knows how long it will take anyone to fix a RAID card or whatever. The idea is to set this up with some sort of failover using Pacemaker and Heartbeat: if one server goes down, the other takes over, running both web and DB. There are some good articles over at Linode about this. I have a few DBs that are 1GB+ and would like to load them into memory. Because of this, I'm shying away from a Linode HA setup, since for the price I could do it with two dedicated servers as described. Am I mad or an idiot? What are people out there doing for pseudo-high-availability, good-performance setups under $400/month? I'm a webmaster; I do a lot of things, none of them that well :)
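
    The core of such a Pacemaker/Heartbeat setup is a floating service IP that follows whichever box is alive. A minimal sketch using the crm shell (the address and resource name are examples, not a production config):

        # A virtual IP that Pacemaker moves to the surviving node on failure.
        crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
            params ip=203.0.113.10 cidr_netmask=32 \
            op monitor interval=30s
        # A two-node pair loses quorum when one node dies, so don't stop on that.
        crm configure property no-quorum-policy=ignore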

    Read the article

  • Is it worth moving from Microsoft tech to Linux, NodeJS & other open source frameworks to save money for a start-up?

    - by dormisher
    I am currently getting involved in a startup. I am the only developer involved at the moment, and the other guys are leaving all the tech decisions up to me. For my day job I work at a software house that uses Microsoft tech on a day-to-day basis: we utilise .NET, SQL Server, Windows Server, etc. However, I realise that as a startup we need to keep costs down, and after a brief look at the cost of Windows hosting I was shocked to see some of the prices for a dedicated server; the cheapest I found was £100 a month. Also, if the business needs to scale in the future and we end up needing multiple servers, we could end up shelling out tens of thousands of pounds a year in SQL Server / Windows Server licenses. I then had a quick look at the price of Linux hosting for a dedicated server and saw the price was way lower than Windows hosting: one place was offering a machine with 2 cores for less than £20 a month. This got me thinking that maybe the way to go is open source on Linux. As I write a lot of JavaScript at work (I'm working on a single-page Backbone app at the moment), I thought maybe NodeJS and a web framework like Express would be cool to use. I then thought that instead of using SQL, why not use an open-source NoSQL database like MongoDB, which has great support in NodeJS? My only concern is that some of the work the application is going to do is dynamically building images and various other image-related stuff, i.e. work that is quite CPU-heavy, so I'm thinking of writing anything CPU-heavy in C++ and consuming it as a module in Node. That's the background; basically, is Linux a good match for hosting a NodeJS/Express site, compiling C++ Node modules, and using a NoSQL DB like MongoDB? And is it a good idea to move to these unfamiliar technologies to save money?

    Read the article

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('server B') on which I can run an IRCd to provide redundancy in case server A crashes. This is fine; I can set up round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to fail over in case of a server failure? E.g., server A starts off running the services but suddenly crashes; server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on server A). One solution that comes to mind is to write a bot that each server runs, sitting in a channel and periodically checking whether the bot from the other server is present. If it is, then all is well; if not, then fail over. I would prefer not to have to code this myself, though. We are currently using UnrealIRCd and Anope services on Linux.

    Read the article

  • Why does some software not get load-balanced even when there are multiple cores?

    - by Nav
    While VTune Analyzer was running on a blade server with 8 cores, I observed the CPU usage percentages using mpstat -P ALL 1. mpstat showed me that VTune was taking up 100% of a single core, while all other cores were idle. Why does that happen? Shouldn't the OS (RHEL Server 5.2) automatically distribute the load across cores? The same happened when I tried running MATLAB (even after enabling multithreading support in the MATLAB settings). P.S.: I'm a developer, not a sysadmin, so I felt it better to ask here rather than at Server Fault.
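
    The short answer is that the kernel schedules threads, not programs: a single-threaded process can only ever occupy one core at a time, no matter how many are idle. What you can control is which cores a process is allowed on; a minimal sketch (the PID and command name are examples):

        # Pin a process to core 3 at launch time:
        taskset -c 3 ./some_single_threaded_tool
        # Allow an already-running process (PID 1234) onto all 8 cores:
        taskset -cp 0-7 1234
        # Watch per-core utilisation, as before:
        mpstat -P ALL 1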

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have: a web service at example.com; a secret key shared between a user and the server [K]; a consumer ID which is known to the user and the server, but is not necessarily secret [D]; and a message which we wish to send to the server [M]. The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 char) random string [R] rather than the message for authentication, e.g. H = HMAC(K,R). The user then passes the random string to the server, and the server checks the HMAC server-side (using random string + shared secret). As far as I can see, this produces the following issues. There is no message integrity - this is OK; message integrity is not important for this service. A user could re-use the hash with a different message - I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once. Since the client is in control of the random string, it is easier to look for collisions. I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
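
    For illustration, a minimal sketch of the proposed scheme using the openssl CLI (all values are examples; openssl and a POSIX shell are assumed):

        K='shared-secret'              # [K] known to client and server
        R=$(openssl rand -hex 5)       # [R] short client-generated nonce
        T=$(date +%s)                  # timestamp to bound the replay window
        # H = HMAC(K, R || T); the server recomputes this, and also rejects
        # stale timestamps and previously seen nonces.
        H=$(printf '%s' "${R}${T}" | openssl dgst -sha256 -hmac "$K" | awk '{print $2}')
        echo "D=consumer1 R=$R T=$T H=$H"   # what the request would carry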

    Read the article

  • Why not commit unresolved changes?

    - by Explosion Pills
    In a traditional VCS, I can understand why you would not commit unresolved files, because you could break the build. However, I don't understand why you shouldn't commit unresolved files in a DVCS (some of them will actually prevent you from committing such files). Instead, I think that your repository should be locked from pushing and pulling, but not from committing. Being able to commit during the merging process has several advantages (as I see it): the actual merge changes are in history; if the merge is very large, you can make periodic commits; if you make a mistake, it is much easier to roll back (without having to redo the entire merge); and the files could remain flagged as unresolved until they were marked as resolved, which would prevent pushing/pulling. You could also have a set of changesets act as the merge instead of just a single one, which would allow you to keep using tools such as git rerere. So why is committing with unresolved files frowned upon/prevented? Is there any reason other than tradition?
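
    Nothing actually stops you from doing this in git on a scratch branch; a minimal sketch of the workflow argued for above (branch names are examples):

        git checkout -b merge-wip master
        git merge huge-feature           # stops, leaving conflict markers
        git add -A                       # stage everything, markers included
        git commit -m "WIP merge: conflicts still unresolved"
        # ...resolve a chunk of files, commit, repeat as often as needed...
        git commit -am "resolve conflicts in src/"
        # Only when everything is resolved does the result reach a shared branch:
        git checkout master
        git merge merge-wip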

    Read the article

  • GPU Computing - # of GPUs supported

    - by TehTypoKing
    I currently have a desktop with 6 GPUs (3x HD 5970s) in non-CrossFire mode. Unfortunately, it seems that Windows 7 64-bit only supports up to 4 GPUs; I have not been able to find a reliable source to confirm or deny this. If Windows 7 has this limitation, is there a Linux flavor that supports more than 4 GPUs? In case you are wondering, this is not for gaming but for high-speed single-precision computing. With this setup (if I can find 6-GPU support) I am looking to reach 13.8 teraflops. Also, my motherboard does support three 16x PCI Express gen 2 slots, and I have a 1500W power supply plugged into a 20-amp outlet. Windows is able to detect all 6 GPUs, although 2 of them display the warning "Drivers failed to load". To recap: can Windows support 6 GPUs? If not, does Linux? Thank you.

    Read the article

  • Dedicated server for all network functions?

    - by Alan
    I want to set up a fictional network configuration for a school in my neighborhood. They have about 50 computers altogether: 2x20 in computer rooms for students, and another 10 scattered around for various professors. They should all access the internet through a dedicated Linux router machine. What they would like is to have domain names for the three computer groups: Lab1, Lab2 and Professors. The computers in Lab1 and Lab2 should have static IPs and should all be named by number, so there should be 1@Lab1, 2@Lab1, etc. The Professors network should use DHCP, with authentication. Is it an OK solution to have all these functions on a single server (the one which will be used as the router)? Do I have to set up a local DNS for the domain naming? Do the host names for the lab computers have to be set on the clients, or can they be assigned automatically?
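
    Running all of this on the router box is common at this size. A minimal sketch using dnsmasq, which serves local DNS and DHCP from one daemon (names and addresses are examples; DHCP "authentication" as such would need something extra, like 802.1X on the switch):

        # From a root shell on the router:
        cat > /etc/dnsmasq.d/school.conf <<'EOF'
        domain=school.lan
        local=/school.lan/
        expand-hosts
        # DHCP pool for the Professors group:
        dhcp-range=192.168.1.100,192.168.1.150,12h
        EOF
        # Statically addressed lab machines get names via the router's hosts file:
        echo '192.168.2.1  1-lab1' >> /etc/hosts
        service dnsmasq restart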

    Read the article
