Search Results

Search found 32994 results on 1320 pages for 'second level cache'.


  • unit testing on ARM

    - by NomadAlien
    We are developing application-level code that runs on an ARM processor. The BSP (low-level code) is delivered by a third party, so our code sits just on top of this abstraction layer (the code is written in C++). To do unit testing, I assume we will have to mock/stub out the BSP library (essentially abstracting out the HW), but what I'm not sure of is this: if I write and run the unit tests on my PC, do I compile them with, for example, GCC? Normally we use the RealView compiler to compile our code for the ARM. Can I assume that if I compile and run the code with an x86 compiler and the unit tests pass, they will also pass when the code is compiled with the RealView compiler? I'm not sure how much difference the compiler makes, and whether you can trust that if the x86-compiled code passes the unit tests, the RealView-compiled code is OK as well.
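
    For illustration, a minimal sketch of the usual host-side approach: compile the production code on the PC with GCC and link it against a stub of the BSP instead of the vendor library. The function name bsp_uart_write and the captured-log helper are hypothetical, not taken from the question.

        // bsp_stub.cpp -- host-side stand-in for the vendor BSP, linked only into the test build
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Captured output, so tests can assert on what the code under test sent to the "hardware".
        static std::vector<uint8_t> g_uart_log;

        // Same signature as the (hypothetical) real BSP call.
        extern "C" int bsp_uart_write(const uint8_t* data, std::size_t len) {
            g_uart_log.insert(g_uart_log.end(), data, data + len);
            return 0;  // pretend the write always succeeds
        }

        // Helper the unit tests can call to inspect what was "written".
        const std::vector<uint8_t>& uart_log() { return g_uart_log; }

    Passing on x86 then validates the logic, though not target-specific behaviour such as type sizes, alignment, endianness or timing, so host-based tests complement rather than replace a smaller on-target suite.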

    Read the article

  • Why did the team at LMAX use Java and design the architecture to avoid GC at all cost?

    - by kadaj
    Why did the team at LMAX design the LMAX Disruptor in Java when their whole design aims at minimizing GC use? If one does not want the GC to run, why use a garbage-collected language? Their optimizations, their level of hardware knowledge and the thought they put in are just awesome, but why Java? I'm not against Java or anything, but why a GC language? Why not use something like D, or any other language without GC that still allows efficient code? Is it that the team is most familiar with Java, or does Java possess some unique advantage that I am not seeing? Say they developed it in D with manual memory management; what would be the difference? They would have to think low-level (which they already do), but they could squeeze the best performance out of the system since it's native.
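
    For illustration only, here is a rough C++ sketch (not LMAX's actual code, which is Java) of the preallocation idea at the Disruptor's core: once every slot exists up front, the hot path allocates nothing, which is the same property the LMAX team engineered around the JVM's GC.

        // Rough sketch of the Disruptor's core trick: allocate every slot once,
        // up front, then only overwrite in place, so the steady state performs
        // no allocation at all -- in Java, that keeps the GC idle.
        #include <array>
        #include <cstddef>
        #include <cstdint>

        struct Event {           // fixed-size slot, reused forever
            int64_t sequence = 0;
            double  price    = 0.0;
        };

        template <std::size_t N> // N must be a power of two
        class RingBuffer {
            static_assert((N & (N - 1)) == 0, "size must be a power of two");
            std::array<Event, N> slots_{};   // preallocated once
            uint64_t next_ = 0;
        public:
            Event& claim() { return slots_[next_++ & (N - 1)]; }  // overwrite in place
        };

        int main() {
            RingBuffer<1024> ring;
            for (int i = 0; i < 10000; ++i) {
                Event& e = ring.claim();     // no new/delete on the hot path
                e.sequence = i;
                e.price = i * 0.5;
            }
        }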

    Read the article

  • Find directories that DON'T contain a file

    - by Oli
    Yes, I'm sorting out my music. I've got everything arranged beautifully in the following mantra: /Artist/Album/Track - Artist - Title.ext, and if one exists, the cover sits in /Artist/Album/cover.(jpg|png). I want to scan through all the second-level directories and find the ones that don't have a cover. By second level, I mean I don't care if /Britney Spears/ doesn't have a cover.jpg, but I would care if /Britney Spears/In The Zone/ didn't have one. Don't worry about the cover downloading (that's a fun project for me tomorrow); I only care about the glorious bash-fu of an inverse-ish find example.
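
    The asker wants bash, but the same second-level scan can be sketched in C++17's std::filesystem for illustration; the root path /music is a placeholder.

        // Sketch: list every Artist/Album directory that lacks cover.jpg / cover.png.
        // C++17, std::filesystem; "/music" is a placeholder root.
        #include <filesystem>
        #include <iostream>
        namespace fs = std::filesystem;

        int main() {
            const fs::path root = "/music";
            for (const auto& artist : fs::directory_iterator(root)) {
                if (!artist.is_directory()) continue;
                for (const auto& album : fs::directory_iterator(artist.path())) {
                    if (!album.is_directory()) continue;
                    if (!fs::exists(album.path() / "cover.jpg") &&
                        !fs::exists(album.path() / "cover.png"))
                        std::cout << album.path() << '\n';   // album with no cover
                }
            }
        }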

    Read the article

  • I can't get grub menu to show up during boot

    - by wim
    After trying (and failing) to install better ATI drivers in 11.10, I've somehow lost my GRUB menu at boot time. The screen does change to the familiar purple colour, but instead of a list of boot options it's just a blank solid colour, which then disappears quickly and the machine boots into the default entry normally. How can I get the bootloader menu back? I've tried sudo update-grub and also various combinations of resolutions and colour depths in the startupmanager application with no success (640x480, 1024x768, 1600x1200, 16 bits, 8 bits, 10 second delay, 7 second delay, 2 second delay...). EDIT: I have already tried holding down Shift during bootup and it does not seem to change the behaviour. I get the message "GRUB Loading" in the terminal, but then, where the GRUB menu normally appears, I get a solid blank magenta screen for a while. Here are the contents of /etc/default/grub:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=" vga=798 splash"
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Read the article

  • Windows Phone: Nokia and Microsoft organize new free training sessions all over France from April to May

    Windows Phone: Nokia and Microsoft are organizing new free training sessions all over France from April to May. The Nokia Team is heading back out on the roads of France for a second Road Show on Windows Phone 7.5. For this new edition, two levels of training will be offered. "Level 1" will introduce Mango (the Metro interface, Live Tiles, notifications...). It will also cover use of the 7.1 SDK, the Marketplace and the AppHub for monetizing and publishing applications. "Level 2" is aimed at developers who already know the basics of Mango development; it will detail best practices...

    Read the article

  • Why are there different programming languages [closed]

    - by Velizar Hristov
    I'm not asking about the usefulness of the languages that already exist: I know, and agree, that different languages are better for different purposes. But why isn't there a single language that can do it all? Why, when C# was created, didn't they keep everything from C and C++ and just add a few things, so that it could be used as both a low-level and a high-level language? I see no harm in adding all kinds of commands to a single language to make it good for everything, even eliminating the need for all other languages. Someone in another thread said that if there's a flaw in a certain language, its successor might not have it. But why don't we just update that language to remove the flaw, and/or add anything that's missing? Arrays are different in Java and C#, but why not have both kinds, just with different commands for them? And so on...

    Read the article

  • Can I create a virtual network interface to connect to a real network device?

    - by michelemarcon
    I have a networked Windows PC with 2 network interfaces. The first connects to a LAN with IP address 10.1.. The second connects to another LAN with IP address 10.2.. Maybe it's a dumb question, but is it possible to virtualize the second network interface, so that the PC can connect to the 2 LANs? If necessary, I may switch to Linux or paravirtualization. CLARIFICATION: I want to send DHCP broadcast packets on the second LAN, but not on the first. I want to do it with one single physical network interface. At the moment, I'm not using any virtualization software.

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite some people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Provider

    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation, so I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions is just a matter of using different providers.

    URL Handling

    Another thing that's substantially different on Azure (and in load-balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external-facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

    New Features

    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can easily be extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface, which allows building management interfaces on top of this provider. I also include a (very) simple web page for working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, especially important for the Azure version. Stay tuned.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, it may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When this block is corrupted, you essentially have three corrupted copies. Three hit with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. To its inner mechanisms, a metadata block is no different from a normal data block, because there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you use a device with block-level deduplication. The point is that for most implementations you have to activate it explicitly, whereas certain devices do it by default or by design and you don't know about it. However, I'm not perfectly sure about that; given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anyone, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you use the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, this isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and whose LUNs you use for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancy, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
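
    To make the failure mode concrete, here is a toy C++ sketch (no real device's firmware works exactly like this) of block-level dedup as a content-addressed store: three logically redundant copies collapse into one physical block, so one corruption destroys all three.

        // Toy sketch of block-level dedup: blocks are stored once, keyed by their
        // content hash, so writing the same metadata block three times yields three
        // logical references to ONE physical copy -- corrupt that copy and all three
        // "redundant" copies are gone. (Collision handling omitted; this is a toy.)
        #include <cstddef>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        class DedupStore {
            std::unordered_map<std::size_t, std::string> physical_;  // hash -> one stored block
        public:
            // Returns a handle (the hash) the caller uses as its "block address".
            std::size_t write(const std::string& block) {
                std::size_t h = std::hash<std::string>{}(block);
                physical_.emplace(h, block);   // identical content stored only once
                return h;
            }
            std::size_t physical_blocks() const { return physical_.size(); }
        };

        int main() {
            DedupStore dev;
            std::vector<std::size_t> copies;
            for (int i = 0; i < 3; ++i)
                copies.push_back(dev.write("zfs-rootblock-contents"));  // three "copies"
            std::cout << "logical copies: " << copies.size()
                      << ", physical blocks: " << dev.physical_blocks() << '\n';
            // prints: logical copies: 3, physical blocks: 1
        }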

    Read the article

  • How to set up a fast VPN server

    - by Saif Bechan
    I am trying to set up a VPN that has a fast download speed. The server I have is a Linux server, and from it I can download at 2 megabytes a second. At home I can also download at 2 megabytes a second. All the downloads I do are from the same source, not different servers. Now I have set up a VPN connection between my home and the server, and through it I am only downloading at 64 kilobytes a second! The connection I have created is a PPTP server on a Debian machine. My question is whether it is possible to optimize this connection. Should I maybe switch to OpenVPN, or change operating systems? Or are there settings to tweak to make the connection optimal? PS: the server is running on a XEN node. I have done the proper IP forwarding.

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10,000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update: `(<csite> <csite> <csite>), where a <csite> would be (1362501715 . <site>). At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list of questions: `("codereview" <cquestion> <cquestion> <cquestion>). Each <cquestion> is, you guessed it, a cons of a question with its last update time: (1362501715 . <question>), (1362501720 . <question>), and so on. A <question> is a cons of a question structure and a list of answers (again, consed with their last update time): `(<question-structure> <canswer> <canswer> <canswer>), where a <canswer> is (1362501715 . <answer-structure>). This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love). The explicit conses are likely unnecessary, but they help my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into (<epoch-time> <api-param> <cquestion> <cquestion> ...). Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could, and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each node inherits its last-update time from its children, and so on down the tree. To what extent this cull should take place, I'm not sure.) Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?

    Read the article

  • Which creative framework can create these games? [closed]

    - by Rahil627
    I've used a few game frameworks in the past and have run into limitations. This led me to "creative frameworks". I've looked into many, but I cannot determine the limitations of some of them. Selected frameworks, ordered from highest to lowest level: Flash, Unity, MonoGame, OpenFrameworks (and Cinder), SFML. I want to be able to: create a game that handles drawing on an iPad; create a game that uses computer vision from a webcam; create a multi-device iOS game; create a game that uses input from a Kinect. Can all of these frameworks handle this? What is the highest-level framework that can handle all of them?

    Read the article

  • C++ program...overshoots? [migrated]

    - by Zdrok
    I'm decent at C++, but I may have missed some nuance that applies here. Or maybe I completely missed a giant concept; I have no idea. My program was instantly crashing ("blah.exe is not responding") about 1 in 5 times it was run (the other times it ran completely fine), and I tracked the problem down to a constructor for a world class that is called once at the beginning of the main function. Here is the code (in the constructor) that causes the problem:

        int ii;
        for (ii = 0; ii <= 255; ii++) {
            cout << "ent " << ii << endl;
            entity_list[ii] = NULL;
        }
        for (ii = 0; ii <= 255; ii++) {
            cout << "sec " << ii << endl;
            sector_list[ii] = NULL;
        }
        entity_list[0] = new Entity(0, 0);
        entity_list[0]->_world = this;

    Specifically, the second for loop. The cout calls are new, added for the sake of telling where it is having trouble. It would print the entire "ent 1" to "ent 255", then "sec 1" to "sec 255", and then crash right after, as if it was going for a 257th run through of the second for loop. I set the second for loop to go until ii <= 254, which stopped all crashes. Does C++ code tend to "overshoot" for loops or something? What is causing it to crash at this specific loop, seemingly at random? By the way, entity_list and sector_list point to classes called Entity and Sector, respectively, but they are not constructing anything, so I didn't think it would be relevant. I also have a forward declaration for the Entity class in a header for this, but since none were being constructed I didn't think it was relevant either. EDIT: It was due to the new Entity line; I wrongly assumed that because altering the for statement to 254 fixed the crashes, the problem had to be there. I still don't understand why the for loop is related, though.
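
    For what it's worth, C++ never "overshoots" a loop: writing past the end of an array is undefined behaviour, and the crash can surface at an unrelated point, which is why changing the loop bound merely moved the symptom. A small illustrative sketch (the names here are made up, not from the question):

        // Sketch: an off-by-one loop, with the bounds check made explicit.
        #include <array>
        #include <iostream>
        #include <stdexcept>

        int main() {
            std::array<int*, 256> list{};            // valid indices are 0..255
            try {
                for (int i = 0; i <= 256; ++i)       // deliberate off-by-one: 257 iterations
                    list.at(i) = nullptr;            // .at() throws instead of corrupting memory
            } catch (const std::out_of_range& e) {
                std::cout << "caught: " << e.what() << '\n';
            }
            // With raw arrays or operator[], the same mistake silently writes past
            // the end: undefined behaviour that may crash later, elsewhere, or never.
        }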

    Read the article

  • Breaking into Programming

    - by Kevin
    I've noticed that there is a gap between getting a formal education in computer science as a student and entry-level/junior programming jobs. Obviously entry-level programming requires that you know some programming, but how much do you need to break in? I'm in a QA non-coding role with basically a minor in CS, looking to improve my own programming skills to eventually switch industries. However, I'm completely at a loss as to what I should be focusing on learning, and am curious as to the steps other people have taken to get experience post-undergrad.

    Read the article

  • PASS: FY10 Actuals Posted

    - by Bill Graziano
    Earlier this year we published preliminary fiscal year 2010 financials to the Governance page on the PASS web site.  Please remember that FY10 runs from July 1st, 2009 through June 30th, 2010 and includes the November 2009 Summit.  We do our fiscal year this way so that the Summit falls earlier in the fiscal year.  The financials we had posted were P&L numbers at the portfolio level.  Prior to this we had posted our detailed budget but only posted the auditor's report at the end of each year.  Today we updated our published financials to include: Pre-audit actuals from FY10 at the same level as our budget.  The document has both actuals and budget for FY10 side by side.  This is over 20 pages of detailed financial information covering hundreds of line items. A letter describing key differences between our budget and actuals.  I walked through each line item where the difference was greater than $25,000 and explained what happened and why. We updated the financial graph going back to 2003 to include FY10. This update should "close the loop" on our financials.  You can now start with the published budget and compare it to the finished financials at the same level of detail.  We also plan to publish the auditor's report when that is completed -- as we do every year. Overall I'm very happy with how FY10 turned out.  Keep in mind that this was the November 2009 Summit, so we were still facing economic challenges.  With all that, we were roughly break-even, showing a $15,000 profit on $3.9 million of revenue.  I didn't find anything shocking in reviewing our actual vs. budget, but there were a few things that needed explanation.  You can see those in the letter on the governance page. Please keep in mind that these are the actuals from our operating financials.  The auditor may have us make adjustments for depreciation or other financial transactions.  We may also account for certain transactions differently for tax purposes than we do for financial reporting purposes.  I feel these financial statements give you the clearest picture of how our organization spends its money. We were late publishing these this year.  We were working through some tax issues, and that delayed our ability to file our final tax forms, which delayed this process.  In hindsight I should have published these documents as soon as we had them and not waited for the tax issues.  We'll do this better in the future. And on a final note, you don't need to log in to view these documents.  If you have any questions you can post them here.  If we get more than a few questions we may see about creating some forums for financial issues on the PASS web site.

    Read the article

  • Asking potential developers to draw UML diagrams during the interview

    - by DotnetDude
    Our interview process currently consists of several coding questions, technical questions, and questions about experiences at the candidate's current and previous jobs. The coding questions are typically a single method that does something (think of it as a fizzbuzz or reverse-a-string kind of question). We are planning on introducing an additional step where we give candidates a business problem and ask them to draw a flowchart, activity, class or sequence diagram. We feel that our current interview process does not let us evaluate the candidate's thinking at a higher level (which is relevant for architect/senior-level positions). To give you some context, we are a mid-size software company with around 30 developers on the team. If you have this step in your interview process, how has it improved the accuracy of your interviews? If not, what else has helped you evaluate candidates better from a technical perspective?

    Read the article

  • Windows 7, slow keyboard

    - by dwight kelly
    I am using Windows 7. The keyboard requires approximately 1 second of holding a key down before it sends the letter. The PC will click once at the half-second mark, then at 1 second the letter will show up. I thought the keyboard was bad, so I purchased a new one (USB), and the same thing happened. I pulled out an old PS/2 keyboard: same thing. I booted the PC up and went into the BIOS; the keyboard works fine there. I tried uninstalling and reinstalling the drivers, no change. Please advise.

    Read the article

  • Help me come up with my new job title

    - by Seva Alekseyev
    Hi all, I used to be a technical lead in a group of 3-5 programmers. A tech lead's responsibilities here include thinking of/designing the overall solution architecture, coding, refactoring, being the first to dive into the next big thing, reviewing others' code, sitting in on customer meetings and answering endless questions from the rest of the team. Now I'm moving on to a branch-level position (in a branch of ~60 people), which entails pretty much the same, minus maybe the coding/refactoring part. Still kind of a tech lead, but the title "tech lead" is already in use and means something else: a group-level tech lead. Please help me come up with a good job title. I need something for my e-mail signature and, eventually, my resume.

    Read the article

  • Managing a manager who expects too much

    - by dotnetdev
    I am in 3rd-line support. We do a lot of bug fixing (although we should be doing other stuff). Quite often we get systems that are so badly designed and configured (at the server OS level and the software level) that they are beyond repair. Yet my manager, even though he was a dev, may swear when I tell him a system is unrepairable (the person who does our server work having given the opinion that it's FUBAR). He still expects it to work without a rebuild. How can I make it work like that when a guy with a million years more experience says the system needs a rebuild?

    Read the article

  • Configure FTP Server with two different IP addresses on different subnets and separate NICs

    - by Luke
    I have an FTP server that's on a low-bandwidth connection. We want to set it up with a second IP address on a much higher-bandwidth connection. I set up the second interface with a static IP address on the faster connection. Unfortunately, this does not work. I can verify that the second IP address works perfectly when I disable the first one. What do I need to do to get two separate interface IP addresses, on different subnets, working on the same server?

    Read the article

  • Multiple SSL on same IP [closed]

    - by kadourah
    Possible Duplicate: Multiple SSL domains on the same IP address and same port?
    I have the following situation. First domain: test.domain.com; IP: 1.2.3.4; port: 443; SSL: purchased from GoDaddy and specific to that domain. Works fine, no issues. I would like to add another site, test2.domain.com; IP: the same; port: can be different; SSL: different, since I can't use the certificate above, because it's specific to the first site. Now, when I add the HTTPS binding to the second site with that IP:port combination, it appears to always load the first certificate, ignoring the second one. How can I add a second SSL binding to the same IP using a different certificate? Can this be done?

    Read the article

  • Get script for every action in SQL Server Management Studio

    I am always careful to keep a record of all operations performed on my database servers. Operations done through T-SQL in an SSMS query pane can easily be saved in query files. For table modifications through the SSMS designer, I have a predefined setting to generate T-SQL scripts. However, there are numerous database- and server-level tasks for which I use the SSMS GUI, and I would like to have a script of those changes for later reference. Examples of such actions through the SSMS GUI are backup/restore, changing the compatibility level of a database, manipulating permissions, dealing with database or log files, and creating or manipulating any login/user. I am looking for a way to generate T-SQL code for such actions, so that it may be kept for later reference.

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game, and I was wondering what would be more beneficial when dealing with game assets, with regard to storage. I have seen some games use a single compressed asset file with everything in it, and others lots of little compressed files. If I had lots of individual files, I would not need to load one large file at once and use up memory, but the code would have to do file seeking when the level loads to find all the correct files. There is no file seeking needed when dealing with one large file but, again, what about all the assets not currently needed that would get loaded with it? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me what other advantages and disadvantages there are to either way of doing things.
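
    For illustration, a common middle ground is a single archive with an index of offsets, so assets can still be read lazily one at a time. The following is a minimal C++ sketch of an invented pack format (a uint32 entry count, then name/offset/size records), not any engine's actual one.

        // Minimal sketch of a pack-file reader: one big archive, but each asset is
        // located through an in-memory index and read on demand, so you get
        // single-file deployment without loading everything up front.
        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct IndexEntry { uint64_t offset; uint64_t size; };

        class PackFile {
            std::ifstream file_;
            std::unordered_map<std::string, IndexEntry> index_;
        public:
            explicit PackFile(const std::string& path) : file_(path, std::ios::binary) {
                uint32_t count = 0;
                file_.read(reinterpret_cast<char*>(&count), sizeof count);
                for (uint32_t i = 0; i < count; ++i) {        // read the table of contents
                    uint32_t len = 0;
                    file_.read(reinterpret_cast<char*>(&len), sizeof len);
                    std::string name(len, '\0');
                    file_.read(name.data(), len);
                    IndexEntry e{};
                    file_.read(reinterpret_cast<char*>(&e), sizeof e);
                    index_.emplace(std::move(name), e);
                }
            }
            // Load one asset by name, seeking directly to its bytes.
            std::vector<char> load(const std::string& name) {
                const IndexEntry& e = index_.at(name);
                std::vector<char> data(e.size);
                file_.seekg(static_cast<std::streamoff>(e.offset));
                file_.read(data.data(), static_cast<std::streamsize>(e.size));
                return data;
            }
        };

    One answer to the shared-assets question under this scheme is a common archive that every level's manifest references, so shared data lives once on disk and is loaded only when a level asks for it.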

    Read the article

  • Convincing upper management of the need for larger monitors for developers

    - by The Rubber Duck
    The company I work for has recently hired several developers, and there are a limited number of monitors to go around. There are two types in the office: a standard 15" (thankfully flatscreen) and a widescreen 23". No developer has a machine capable of a dual-monitor setup, and the largest monitors went to the people who got here first. Three or four new senior-level developers only have a 15" monitor to work on. To make matters worse, there are perhaps 25-30 DBA/tester/admin types in the company who all have dual-screen 23" setups. We have brought the issue to management, and they refuse to take large monitors away from people who have been here for years for the sake of new employees, even senior-level ones. We have pitched the idea of testers sacrificing a large monitor for one of our small ones, but they won't go for that either. What can I say to management to illustrate developers' need for larger monitors?

    Read the article

  • Windows server 2008 - Access Based Enumeration (ABE) not working correctly

    - by Napster100
    I have a folder shared with permissions such that only one user account, the admin account and the admin group have access to it, but when I open the shared area from a second user account which does not have access, the folder is still visible to that account, despite ABE being enabled on it, on all other parent directories/folders, and even on the drive. The user can't access the shared folder (which is what I want), but I'd like the folder to also be invisible to that user, just to make things look cleaner, so there's no confusion between what they can access and what they cannot. How would I stop the folder appearing for users who don't have permission to use it? Thanks in advance. EDIT: I've just added the second user account to the permissions list but denied it access, so that the account definitely has no permission to access the folder in any way, but that still doesn't hide it.

    Read the article
