Search Results

Search found 54118 results on 2165 pages for 'default value'.


  • Microsoft Keyboard (8000) play button opens Windows Media Player rather than Zune on Windows 7

    - by Chance
    I'm currently using the Microsoft 8000 keyboard on Windows 7. Before I installed Microsoft's keyboard application, the play button opened Zune and started playing music. After installation, it opens Windows Media Player instead of Zune. If Zune is already open and I press play, it still opens WMP; pressing play again then triggers the play command in Zune. I've checked the default programs in the Control Panel and set Zune as the default for everything available, but that hasn't changed anything. Has anyone run into this before? I'm a bit stumped, as googling doesn't turn up any relevant results. Thanks!
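    One thing worth checking (a sketch only, not a confirmed fix): Windows can map multimedia keys through the Explorer AppKey registry keys, and keyboard software sometimes writes entries there that point at Windows Media Player. The subkey number 14 below corresponds to the Play/Pause app command, but whether the play button on this keyboard is actually routed through that key is an assumption, as is the Zune install path; inspect the existing subkeys before changing anything.

        REM Hypothetical example only - export/back up the key first.
        REM List any per-key overrides the keyboard software may have created:
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey" /s
        REM If a numbered subkey holds a ShellExecute value pointing at wmplayer.exe,
        REM it can be redirected to Zune (14 = Play/Pause is the assumption here):
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\14" /v ShellExecute /t REG_SZ /d "\"C:\Program Files\Zune\Zune.exe\"" /f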

    Read the article

  • Where does the temporary Flash file get stored when I am viewing it in Firefox?

    - by Nishant
    I am watching a lecture that appears to be Adobe Flash, and I want to save the video I am viewing. The site I am watching is http://cs75.tv/2009/fall/ and I am using Firefox. I don't know if this helps, but my about:cache output is:

    Memory cache device: 212 entries; maximum storage size 13312 KiB; storage in use 8087 KiB; inactive storage 6819 KiB.

    Disk cache device: 3224 entries; maximum storage size 500000 KiB; storage in use 26066 KiB; cache directory C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache.

    Offline cache device: 0 entries; maximum storage size 512000 KiB; storage in use 0 KiB; cache directory C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache.

    Read the article

  • Internet Explorer defaults to 64-bit version

    - by Tim Long
    My IE8 has suddenly started defaulting to the 64-bit version. I have no idea how or why this happened, but I suspect it might be linked to the Browser Choice Screen that Microsoft was recently forced to display by EU law. However, many web sites will not display correctly in IE8 x64 (e.g. sites that use Adobe Flash or Microsoft Silverlight). I have the 32-bit version of IE pinned to my taskbar, and if I launch it manually everything is fine. But when I click a URL from another program and IE is not already running, the 64-bit version gets launched. This really messes with programs like BBC iPlayer, which rely heavily on Adobe AIR and Flash. So, how do I get the 32-bit version of IE8 to be the default again? I've tried the "default programs" control panel and that doesn't make any difference (in fact, it doesn't offer a choice between the x86 and x64 versions; it just lists "Internet Explorer").

    Read the article

  • How do you run XBMC on nvidia dual screen and stop it from taking over the keyboard and mouse?

    - by Paul Swartout
    I have set up dual screens under Ubuntu 12.04. I have a GeForce 8500 GT and used the nVidia control panel to configure dual screens in "Separate screen mode". Here's the resulting xorg.conf:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        Screen 1 "Screen1" RightOf "Screen0"
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "0"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "Maxdata/Belinea B1925S1W"
        HorizSync 31.0 - 83.0
        VertRefresh 56.0 - 75.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        # HorizSync source: builtin, VertRefresh source: builtin
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "CRT-1"
        HorizSync 28.0 - 55.0
        VertRefresh 43.0 - 72.0
        Option "DPMS"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce 8500 GT"
        BusID "PCI:1:0:0"
        Screen 0
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "GeForce 8500 GT"
        BusID "PCI:1:0:0"
        Screen 1
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "CRT-0: nvidia-auto-select +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        # Removed Option "metamodes" "CRT-1: 1280x768 +0+0"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "metamodes" "CRT-1: 1360x768_60 +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    All well and good, and I have a nice blank X window displayed on my TV (the second monitor). I then fire up XBMC from a terminal on the PC monitor using DISPLAY=:0.1 xbmc. XBMC starts up nicely on the TV, but I can no longer use the main PC monitor, mouse or keyboard, as XBMC on the TV screen seems to have taken the focus. I was hoping to have XBMC running on the TV so the kids can use the MCE remote while I get on with my work on the PC monitor. Does anyone have any idea how to overcome this? I'm presuming some xorg.conf fun and games are needed, but to be honest I've no idea where to start.

    Read the article

  • Start vino-server (VNC) before login on Linux CentOS

    - by Dr. Gianluigi Zane Zanettini
    I'm using the default vino-server package to access my CentOS 6 workstation via VNC. It works OK, but only after I log in locally on the workstation. I need vino-server to start before login, right at the GNOME login screen where I choose my username and password. For personal reasons, I need to use Vino and not vnc-server or any other package. I already tried adding /usr/libexec/vino-server & to /etc/gdm/Init/Default, but that didn't solve the issue.
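    A minimal sketch of the /etc/gdm/Init/Default approach, with the caveat that vino-server normally expects a logged-in user session (D-Bus and GConf settings), so starting it from the greeter script may still fail; the display number is an assumption:

        # /etc/gdm/Init/Default -- hypothetical addition, placed before the script's final exit.
        # Assumes the greeter runs on display :0; vino may still refuse to run
        # without a full user session (not guaranteed to work).
        DISPLAY=:0 /usr/libexec/vino-server &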

    Read the article

  • Cannot Boot, How to recover

    - by Kendor
    I'm running 11.10 64-bit with GNOME Shell. Something happened late Friday and now my machine never gets to the login screen. I do get to the Ubuntu splash logo, but after that I get a text screen that it hangs on. The screen refers to problems mounting various network resources, including VMware shares and some references to my NAS that are in fstab. If I hit Esc I can get to the GRUB menu and into the recovery console. If I try to run a file system check, I hit an error screen similar to the one I see when booting normally. A possible clue: during my last good session I made some changes to /etc/hosts to reference another system I'm connecting to with Synergy. I don't believe I have a hardware problem, as I can boot from a live USB and connect to my network and the Internet. A few more tidbits: I have regular Déjà Dup backups on my NAS, I have a good Clonezilla whole-drive image that is 4-6 weeks old, and my home directory is encrypted. I thought I'd try blowing away my hosts file from the live USB, but when I mounted the hard drive everything was read-only and I couldn't figure out how to replace it. P.S. I logged in via the CLI and edited the hosts file to remove the entry I'd made, to no avail. The system still gets stuck on the following: "CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel version 3.1". Would love some sober advice on how to attack this.
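    For the read-only mount problem mentioned above, a minimal sketch from a live USB session (the device name /dev/sda1 and mount point are assumptions; an encrypted home directory does not get in the way here, since /etc/hosts lives on the root filesystem):

        # from the live USB session
        sudo mount /dev/sda1 /mnt            # mount the installed root partition (device name is an assumption)
        sudo mount -o remount,rw /mnt        # force read-write if it came up read-only
        sudo nano /mnt/etc/hosts             # edit or restore the hosts file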

    Read the article

  • Prevalence of WMI enabled in real Windows Server networks

    - by TripleAntigen
    I would like to get opinions from systems administrators on how common it is for WMI functionality to actually be enabled in corporate networks. I am writing an enterprise network application that could benefit from WMI's features, but I noticed after creating a virtual network based on Server 2008 R2 that WMI seems to be disabled by default. Do sysadmins in real corporate networks enable WMI? Or is it usually disabled for security reasons? What is it used for when it is enabled? Thanks for any advice! MORE INFO: I should have said that I really need to be able to query the workstations, but I understand that by default the WMI ports are blocked by the Windows 7 and XP firewalls (at least). Do you use some sort of Group Policy or other method to leave a hole open for WMI on the workstations? Or is it just the servers that are of interest? Thanks for the responses!
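    For the firewall part specifically, a sketch of the usual way the WMI exception is opened on workstations (the rule-group name is the built-in one on Vista/7/2008; on XP the older RemoteAdmin service switch applies). In practice this tends to be pushed out via Group Policy rather than run by hand on each machine:

        REM Windows Vista / 7 / Server 2008 and later:
        netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
        REM Windows XP / Server 2003:
        netsh firewall set service RemoteAdmin enable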

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations – one-to-one, one-to-many, many-to-one or many-to-many – NHibernate needs to know what to do with the related entities at three particular moments: when saving, updating or deleting. There are two possible behaviors: either ignore the related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:

    None: ignore; this is the default.

    Save-Update: if the entity is being saved or updated, also save any related entities that are either not yet saved or have been modified, and associate them with the root entity. Generally safe.

    Delete: if the entity is being deleted, also delete the related entities. Only useful for parent-child relations.

    Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association – orphaned – it is also deleted. Also only for parent-child relations.

    All: a combination of Save-Update and Delete; usually what we want (for parent-child relations, of course).

    All-Delete-Orphan: the same as All, plus deleting any related entities that lose their relationship.

    In summary, Save-Update is generally what you want in most cases. The Delete variations should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity without deleting its related entities would result in a constraint violation in the database.
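    For illustration, a minimal sketch of how the cascade setting appears in an hbm.xml mapping for a parent-child collection (a fragment that would sit inside a hibernate-mapping element; the Order/OrderLine names and columns are made up for the example):

        <class name="Order" table="Orders">
            <id name="Id">
                <generator class="native" />
            </id>
            <!-- all-delete-orphan: saving or deleting the Order cascades to its lines,
                 and a line removed from the collection is deleted as an orphan -->
            <set name="Lines" inverse="true" cascade="all-delete-orphan">
                <key column="OrderId" />
                <one-to-many class="OrderLine" />
            </set>
        </class>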

    Read the article

  • How do I make virtual host DirectoryIndex file appear in the url?

    - by Bob Flemming
    I have set up a virtual host that specifies a default file to load when the URL is requested. The problem is that I need that default DirectoryIndex file to appear in the URL. So when I go to www.mysite.co.uk, I want www.mysite.co.uk/app.php to appear in the address bar. How can I achieve this with my virtual host configuration in apache.conf? Here is my current code:

    <VirtualHost *:80>
        ServerName *.mysite.co.uk
        DocumentRoot "/var/www/html/mysite/web/"
        DirectoryIndex app.php
    </VirtualHost>
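    One possible approach (a sketch, not the only way): instead of serving app.php silently as the directory index, issue an external redirect for the bare root URL so the browser's address bar actually changes. mod_alias's RedirectMatch can do this:

    <VirtualHost *:80>
        ServerName *.mysite.co.uk
        DocumentRoot "/var/www/html/mysite/web/"
        DirectoryIndex app.php
        # Redirect "/" to "/app.php" with a visible (302) redirect,
        # so the file name shows up in the URL.
        RedirectMatch ^/$ /app.php
    </VirtualHost>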

    Read the article

  • [iptables] Why does 'iptables -A OUTPUT -j REJECT' at the end of the OUTPUT chain override the previous rules?

    - by Serge
    These are my iptables rules:

    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
    iptables -A OUTPUT -p udp --dport 22 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name DEFAULT --rsource
    iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 180 --hitcount 4 --name DEFAULT --rsource -j DROP
    iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
    iptables -A OUTPUT -j REJECT
    iptables -A INPUT -j REJECT
    iptables -A FORWARD -j REJECT

    I'm using a remote SSH connection to set them up, but as soon as I add iptables -A OUTPUT -j REJECT my connection is lost. I have read the iptables documentation and I can't figure it out. The catch-all REJECT rules seem fine for the web server, because I can still access the web page, but SSH gets a timeout. Any idea? Thanks
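    A sketch of the likely cause and fix (reasoning, not a verified answer for this exact box): the OUTPUT rules above only accept traffic whose destination port is 22, but the replies sshd sends back to a remote client leave with source port 22 and an arbitrary destination port, so once the catch-all OUTPUT REJECT is appended those replies get rejected. Allowing established traffic out (or matching on the source port) before the REJECT should keep the session alive:

    # Option 1: mirror the INPUT state rule on OUTPUT, inserted above the REJECT
    iptables -I OUTPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Option 2 (narrower): allow sshd's replies, which leave with source port 22
    iptables -I OUTPUT 1 -p tcp --sport 22 -j ACCEPT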

    Read the article

  • How do I search my mail and Office Communicator conversations from inside Outlook?

    - by Anthony Mastrean
    I am running Office Communicator 2007 R2 and Outlook 2010 on a Microsoft Exchange Server 2010. I am storing my conversation history from Communicator in Exchange in the default folder, "Conversation History". I'm using the conversation view in Outlook. And I have a Gmail-like macro to archive my conversations to an "Archive" folder. I want to search all my mail and conversations at once. By default, Outlook is searching in the current folder only. I tried creating a rule to move the conversations to my Archive folder, but couldn't configure it correctly.

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail, so I brought up a new one. All are running Windows 2003. Problem: there appear to be replication issues between the 4 machines, but I can't figure out what's causing them. All are registered in DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are getting KCCEvent errors and the new machine is reporting "failed connectivity" errors. Running dcdiag on the new machine reports that the host could not be resolved to an IP address. This seems crazy, as I log into it using the DNS name, and I can ping it from the other three machines using that DNS name as well. repadmin /showreps from the new machine shows it seeing the other 3 machines; doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times. No luck. There are no firewalls running on any of the machines. If I look at the domain info via MMC (on the new machine), all the information appears current: users, computers, DCs, it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?

    EDIT: dcdiag from the non-working DC:

    C:\Documents and Settings\Administrator.BME>dcdiag
    Domain Controller Diagnosis
    Performing initial setup: Done gathering initial info.
    Doing initial required tests
      Testing server: Default-First-Site-Name\YELLOW
        Starting test: Connectivity
          The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not be resolved to an IP address. Check the DNS server, DHCP, server name, etc. Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu) couldn't be resolved, the server name (yellow.server.edu) resolved to the IP address (10.127.24.79) and was pingable. Check that the IP address is registered correctly with the DNS server.
          ......................... YELLOW failed test Connectivity
    Doing primary tests
      Testing server: Default-First-Site-Name\YELLOW
        Skipping all tests, because server YELLOW is not responding to directory service requests
      Running partition tests on : Schema
        Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
      Running partition tests on : Configuration
        Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
      Running partition tests on : bme
        Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
      Running enterprise tests on : server.edu
        Starting test: Intersite ......................... server.edu passed test Intersite
        Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck

    dcdiag from a working DC:

    P:\>dcdiag
    Domain Controller Diagnosis
    Performing initial setup: Done gathering initial info.
    Doing initial required tests
      Testing server: Default-First-Site-Name\AD1
        Starting test: Connectivity ......................... AD1 passed test Connectivity
    Doing primary tests
      Testing server: Default-First-Site-Name\AD1
        Starting test: Replications ......................... AD1 passed test Replications
        Starting test: NCSecDesc ......................... AD1 passed test NCSecDesc
        Starting test: NetLogons ......................... AD1 passed test NetLogons
        Starting test: Advertising ......................... AD1 passed test Advertising
        Starting test: KnowsOfRoleHolders ......................... AD1 passed test KnowsOfRoleHolders
        Starting test: RidManager ......................... AD1 passed test RidManager
        Starting test: MachineAccount ......................... AD1 passed test MachineAccount
        Starting test: Services ......................... AD1 passed test Services
        Starting test: ObjectsReplicated ......................... AD1 passed test ObjectsReplicated
        Starting test: frssysvol ......................... AD1 passed test frssysvol
        Starting test: frsevent ......................... AD1 passed test frsevent
        Starting test: kccevent ......................... AD1 passed test kccevent
        Starting test: systemlog ......................... AD1 passed test systemlog
        Starting test: VerifyReferences ......................... AD1 passed test VerifyReferences
      Running partition tests on : Schema
        Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom
      Running partition tests on : Configuration
        Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom
      Running partition tests on : bme
        Starting test: CrossRefValidation ......................... bme passed test CrossRefValidation
        Starting test: CheckSDRefDom ......................... bme passed test CheckSDRefDom
      Running enterprise tests on : server.edu
        Starting test: Intersite ......................... server.edu passed test Intersite
        Starting test: FsmoCheck ......................... server.edu passed test FsmoCheck
    P:\>

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write, but I've found that there are a few inconsistencies I'm introducing and I don't know the best way to resolve them. My example here is specific to C#, but this applies to any language with similar mechanisms. There are a few classes I need for implementation purposes that I don't necessarily want to expose in the API, so I make them internal wherever needed. Generally what I do is design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this:

    internal interface IMyItem
    {
        ItemSet AddTo(ItemSet set);
    }

    internal class _SmallItem : IMyItem
    {
        private readonly /* parameters */;
        public _SmallItem(/* small item parameters */) { /* ... */ }
        public ItemSet AddTo(ItemSet set) { /* ... */ }
    }

    internal abstract class _CompositeItem : IMyItem
    {
        private readonly /* parameters */;
        public _CompositeItem(/* composite item parameters */) { /* ... */ }
        public abstract object UsefulInformation { get; }
        protected void HelperMethod(/* parameters */) { /* ... */ }
    }

    internal class _BigItem : _CompositeItem
    {
        private readonly /* parameters */;
        public _BigItem(/* big item parameters */) { /* ... */ }
        public override object UsefulInformation { get { /* ... */ } }
        public ItemSet AddTo(ItemSet set) { /* ... */ }
    }

    In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The generated class is internal too, but I have control over the visibility of the members and decided to make them internal as well.

    internal partial struct ValueType
    {
        internal string String;
        internal ItemSet ItemSet;
        internal IMyItem MyItem;
    }

    internal class TokenValue
    {
        internal static int EQ(ItemSetScanner scanner) { /* ... */ }
        internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ }
        internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ }
        //...
    }

    To me, this feels odd because in the first set of classes I didn't necessarily have to make some members public; they could very well have been internal. Internal members of an internal type can only be accessed internally anyway, so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., changing all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a class declared public, internal. But it's less clear to me when the class is declared internal.
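    For what it's worth, a small sketch of the accessibility rule in play (the Widget type and method names are made up): a member's effective accessibility can never exceed that of its declaring type, so on an internal class the two spellings are indistinguishable to other assemblies and the choice is purely about style and intent.

        // Both members below are equally invisible to code outside this assembly,
        // because the effective accessibility of a member is capped by its type.
        internal class Widget
        {
            public   void Frob()     { }   // "public", but capped at internal by the type
            internal void FrobMore() { }   // explicitly internal
        }

        // From another assembly: neither Widget, Frob nor FrobMore is accessible.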

    Read the article

  • Exchange 2007 Email Address Policies

    - by Ryan Migita
    We have recently upgraded to Exchange 2007 (from 2003) and have noticed the change from recipient policies to email address policies. We receive email for two separate domains (let's call them domaina.com and domainb.com); each has an email address policy, and neither policy is currently applied. In our Exchange 2003 environment, domaina.com was the default email address when we created new mailboxes; after the migration, domainb.com is the default (and its email address policy has the higher priority). Now, when we create a new mailbox (or edit an existing one), the primary email address becomes domainb.com. So the question is: is this as simple as putting the email address policies in the correct order? Do I have to apply both policies? What effect will these changes have on existing mailboxes? Since we do not have any conditions set on the policies, I assume that before making the changes I should set all domainb mailboxes to not automatically update their email addresses from the policy? Thanks in advance!
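    A sketch of how this might look in the Exchange Management Shell, assuming the policies are named after their domains (the names and priority values are placeholders; test against a single mailbox before applying broadly):

        # Make the domaina.com policy the highest priority, then (re)apply both.
        Set-EmailAddressPolicy -Identity "domaina.com" -Priority 1
        Set-EmailAddressPolicy -Identity "domainb.com" -Priority 2
        Update-EmailAddressPolicy -Identity "domaina.com"
        Update-EmailAddressPolicy -Identity "domainb.com"

        # To stop an individual mailbox from being rewritten by policy at all:
        Set-Mailbox -Identity someuser -EmailAddressPolicyEnabled $false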

    Read the article

  • Sharding / indexing strategy for multi-faceted search

    - by Graham
    I'm currently thinking about our database structure and how to modify it for scale. Specifically, we're thinking about using ElasticSearch to provide our search functionality. One common pattern with ElasticSearch seems to be the 'user-routing' pattern; that is, using routing to ensure that any one user's data resides on the same shard. This is great for client-specific search, e.g. Gmail. Our application has a constraint such that any user will have at most a few thousand documents, so this pattern seems like a good candidate. However, our search needs to work across all users as well as targeting a specific user (so I might search my content, Alice's content, or all content). Similarly, we need to provide full-text search across any timeframe, from recent months to several years ago. I'm thinking of combining the 'user-routing' and 'index-per-time-interval' patterns:

    I create an index for each month.
    By default, searches are aliased against the most recent X months.
    If no results are found, we can search against the previous X months.
    As we grow, we can reduce the interval X.
    Each document is routed by the user ID.

    This should let us do the following:

    Search by user: this searches all indices, but only 1 shard in each.
    Search by time: this searches ~2 indices (by default) across all shards.

    Is this a reasonable approach, considering we may scale to multi-million+ documents? Or should I denormalize the data somehow, so that user searches are performed on a totally separate index from date searches? Thanks for any pros and cons of the above scenario.
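    A sketch of what the combination might look like against ElasticSearch's REST API (the index and alias names, document type, field names and the user ID 42 are all placeholders for the example):

        # One index per month, with user-ID routing applied on writes:
        curl -XPUT 'localhost:9200/content-2014-06/doc/1?routing=42' -d '{"user": 42, "body": "..."}'

        # Alias covering the most recent months, so default searches hit only those indices:
        curl -XPOST 'localhost:9200/_aliases' -d '{
          "actions": [
            {"add": {"index": "content-2014-06", "alias": "content-recent"}},
            {"add": {"index": "content-2014-05", "alias": "content-recent"}}
          ]
        }'

        # Per-user search: the same routing value restricts the query to one shard per index.
        curl 'localhost:9200/content-recent/_search?routing=42' -d '{"query": {"match": {"body": "hello"}}}'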

    Read the article

  • How secure is Microsoft Office 2007's encryption?

    - by ericl42
    I've read various articles about Microsoft's encryption, and from what I gather, Office 2007 is secure with all default options because it uses AES, while 2000 and 2003 can be configured securely by changing the default algorithm to AES. I was wondering if anyone has read other articles or knows of any specific vulnerabilities in how the encryption is implemented. I would like to be able to tell users that they can use this to send semi-sensitive documents as long as they use AES and a strong password. Thanks for the information.

    Read the article

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to put Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, the request goes to a different backend. This works. Now, for most of the sites the default Varnish logic will work just fine. However, for some specific sites I wish to use custom VCL code. I can test for the host name and include config files for the specific domains, but this only works inside the individual methods (recv, etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch, etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
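    A sketch of one way to keep each domain's logic in a single file while still hooking it into the standard subroutines (file paths and the example.com host are placeholders): the per-domain file defines its own named subroutines, and default.vcl only contains thin dispatch code that calls them.

        # /etc/varnish/example_com.vcl -- everything for this one domain lives here
        sub example_com_recv {
            if (req.url ~ "^/admin") { return (pass); }
        }
        sub example_com_fetch {
            set beresp.ttl = 10m;
        }

        # default.vcl -- include at the top level, then dispatch by Host header
        include "/etc/varnish/example_com.vcl";

        sub vcl_recv {
            if (req.http.host ~ "(?i)example\.com$") { call example_com_recv; }
        }
        sub vcl_fetch {
            if (req.http.host ~ "(?i)example\.com$") { call example_com_fetch; }
        }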

    Read the article

  • What causes the iOS OpenGLES driver to allocate extra memory?

    - by Martin Linklater
    I'm trying to optimize the memory usage of our iOS game and I'm puzzled about when/why the iOS GLES driver allocates extra memory at runtime... When I run our game through Instruments with the OpenGL ES Driver instrument the gartUsedBytes value can fluctuate quite wildly. We preload all our textures and build the buffer objects up front, so it's not the game engine requesting extra memory from GL. Currently we are manually requesting around 50MB of GL memory, yet the gartUsedBytes value sits at around 90MB most of the time, peaking at 125MB from time to time. It seems to be linked to what you are rendering that frame - our PVS only submits VBO's for visible meshes. Can anyone shed some light on what the driver is doing in the background ? Like I said earlier, all our game engine allocations are done on level load, so in theory there shouldn't be any fluctuation on GL memory usage while the level is running. Thanks.

    Read the article

  • 10-system LAN latency with ADSL modem as gateway

    - by itsoft3g
    Recently I expanded the LAN in my office from 3 to 10 computers. The structure is a star topology: one ADSL modem connected to one switch, which in turn connects to the 10 computers. We also have a Netgear Wi-Fi device connected to the switch. The ADSL modem acts as the DHCP server, and all the systems use the modem's IP as their default gateway. Network latency has now become very high: chat services like Google Talk and Skype disconnect often, and the Internet becomes very, very slow when all the computers are turned on. We have 4 Mbps download and 100 Kbps upload speed. It looks like the ADSL modem cannot handle all the connections. I tried to set up a system as the default gateway, which would then connect to the modem, but I'm not sure how to do this. Please advise.
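    If the machine acting as the gateway were a Linux box, a minimal sketch of turning it into a NAT gateway between the LAN and the modem might look like the following. The interface names eth0/eth1 and the address plan are assumptions, and you would also need a DHCP/DNS service (e.g. dnsmasq) on the gateway and DHCP disabled on the modem:

        # eth0 = link to the ADSL modem, eth1 = link to the LAN switch (assumed names)
        echo 1 > /proc/sys/net/ipv4/ip_forward                  # enable routing between the two interfaces
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE    # NAT LAN traffic out towards the modem
        ip addr add 192.168.10.1/24 dev eth1                    # LAN-side address the clients use as gateway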

    Read the article

  • How to set up dual monitors in xorg.conf

    - by MrMonty
    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

    Section "ServerLayout"
        # Removed Option "Xinerama" "1"
        # Removed Option "Xinerama" "0"
        # Removed Option "Xinerama" "1"
        Identifier "Layout0"
        Screen 0 "Screen0" 0 0
        InputDevice "Keyboard0" "CoreKeyboard"
        InputDevice "Mouse0" "CorePointer"
        Option "Xinerama" "0"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Mouse0"
        Driver "mouse"
        Option "Protocol" "auto"
        Option "Device" "/dev/psaux"
        Option "Emulate3Buttons" "no"
        Option "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier "Keyboard0"
        Driver "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor1"
        VendorName "Unknown"
        ModelName "Ancor Communications Inc VE247"
        HorizSync 30.0 - 83.0
        VertRefresh 50.0 - 76.0
        Option "DPMS"
    EndSection

    Section "Monitor"
        # HorizSync source: edid, VertRefresh source: edid
        Identifier "Monitor0"
        VendorName "Unknown"
        ModelName "Ancor Communications Inc VE247"
        HorizSync 30.0 - 83.0
        VertRefresh 50.0 - 76.0
        Option "DPMS"
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "Quadro FX 1500"
        BusID "PCI:1:0:0"
        Screen 1
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "nvidia"
        VendorName "NVIDIA Corporation"
        BoardName "Quadro FX 1500"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device "Device1"
        Monitor "Monitor1"
        DefaultDepth 24
        Option "TwinView" "0"
        Option "TwinViewXineramaInfoOrder" "DFP-1"
        Option "metamodes" "DFP-1: 1280x1024 +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        # Removed Option "TwinView" "0"
        # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0"
        # Removed Option "TwinView" "1"
        # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0, DFP-1: 1280x1024 +1280+0"
        # Removed Option "TwinView" "0"
        # Removed Option "metamodes" "DFP-0: 1280x1024 +0+0"
        Identifier "Screen0"
        Device "Device0"
        Monitor "Monitor0"
        DefaultDepth 24
        Option "TwinView" "1"
        Option "TwinViewXineramaInfoOrder" "DFP-0"
        Option "metamodes" "DFP-0: 1280x1024 +0+0, DFP-1: 1280x1024 +1280+0; DFP-1: 1280x1024_60 +0+0"
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Extensions"
        Option "Composite" "Disable"
    EndSection

    That's my file!

    Read the article

  • most efficient AABB vs Ray collision algorithms

    - by Asher Einhorn
    Is there a known 'most efficient' algorithm for AABB vs ray collision detection? I recently stumbled across Arvo's AABB vs sphere collision algorithm, and I am wondering if there is a similarly noteworthy algorithm for this. One must-have condition for this algorithm is that I need to be able to query the result for the distance from the ray's origin to the point of collision. Having said that, if there is another, faster algorithm that does not return the distance, then in addition to posting one that does, posting that algorithm would be very helpful indeed. Please also state what the function's return argument is and how you use it to report the distance or a 'no-collision' case. For example, does it have an out parameter for the distance as well as a bool return value? Or does it simply return a float with the distance, with a value of -1 meaning no collision? (For those that don't know: AABB = axis-aligned bounding box.)
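    For reference, a sketch of the commonly used "slab" test in C# (not necessarily the fastest known variant; the type, method name and out-parameter convention are just one choice). It returns a bool and reports the parametric entry distance along the ray through an out parameter.

        using System;
        using System.Numerics;

        static class RayAabb
        {
            // Ray vs axis-aligned box via the slab method.
            // Returns true on hit; 'distance' is the entry distance along 'dir'
            // (0 if the origin starts inside the box). 'dir' need not be unit length.
            public static bool Intersects(Vector3 origin, Vector3 dir, Vector3 boxMin, Vector3 boxMax, out float distance)
            {
                float[] o  = { origin.X, origin.Y, origin.Z };
                float[] d  = { dir.X,    dir.Y,    dir.Z    };
                float[] lo = { boxMin.X, boxMin.Y, boxMin.Z };
                float[] hi = { boxMax.X, boxMax.Y, boxMax.Z };

                float tMin = 0f;              // clip to the forward half of the ray
                float tMax = float.MaxValue;  // no far limit

                for (int i = 0; i < 3; i++)
                {
                    if (Math.Abs(d[i]) < 1e-8f)
                    {
                        // Ray is parallel to this slab: miss unless the origin lies within it.
                        if (o[i] < lo[i] || o[i] > hi[i]) { distance = -1f; return false; }
                    }
                    else
                    {
                        float inv = 1f / d[i];
                        float t1 = (lo[i] - o[i]) * inv;
                        float t2 = (hi[i] - o[i]) * inv;
                        if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                        tMin = Math.Max(tMin, t1);
                        tMax = Math.Min(tMax, t2);
                        if (tMin > tMax) { distance = -1f; return false; }
                    }
                }

                distance = tMin;
                return true;
            }
        }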

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business Intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article
