Search Results


  • Remapping Home/End from PC to Mac Via Synergy is not client specific.

    - by DtBeloBrown
    This question asks about the End key, but the answers give no examples: http://superuser.com/questions/60052/what-key-works-like-end-using-a-mac-with-synergy If they had, I suspect they would have run into this same problem. Adding lines like the bottom two of this: section: options keystroke(End) = keystroke(Control+Right,myiMac) keystroke(Home) = keystroke(Control+Left,myiMac) to my synergy.sgc in My Documents on the WinXP machine works, but it causes the Home and End keys to stop functioning on the WinXP machine itself. Unacceptable. I next tried a compromise: keystroke(End) = keystroke(Control+Right,myiMac); keystroke(End,myPc) keystroke(Home) = keystroke(Control+Left,myiMac); keystroke(Home,myPc) expecting that to broadcast the keystrokes to both machines regardless of which one was the active screen. That and many other variations did not work. What am I doing wrong? Has someone actually done this?
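
    Not an answer from the thread, but a minimal synergy.sgc sketch of the mapping being attempted is below. The screen names myPc and myiMac come from the question, and the links section is omitted for brevity; the one assumption (not confirmed in the post) is that Synergy separates multiple actions with commas rather than semicolons.

        section: screens
            myPc:
            myiMac:
        end

        section: options
            # send Control+Right/Left when the Mac is active, pass End/Home through on the PC
            keystroke(End)  = keystroke(Control+Right,myiMac), keystroke(End,myPc)
            keystroke(Home) = keystroke(Control+Left,myiMac), keystroke(Home,myPc)
        end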

    Read the article

  • Changing the BizTalk message output file name

    - by Bill Osuch
    By default, BizTalk creates the filename of the message dropped to a send port as %MessageID%, which is the unique identifier (GUID) of the message. What if you want to create your own filename? To start, create a simple schema, and a basic orchestration that will receive the message and send it right back out, like this: If you deploy this and wire up the ports, you can drop an xml file into your receive port and have it come out at your send port named something like {7A63CAF8-317B-49D5-871F-9FD57910C3A0}.xml. Now, we'll create a new message with a custom filename. First, create a new orchestration variable called NewFileName, of type System.String. Next, create a second message using the same schema as the message you're receiving in the Receive shape. Now, drag a Construct Message shape to the orchestration. In the shape's properties, set Messages Constructed to be the new message you just created. Double-click the Message Assignment shape (inside the Construct shape...) and paste in the following code: Message_2 = Message_1;   NewFileName = Message_1(FILE.ReceivedFileName); NewFileName = NewFileName.Replace(".xml","_"); NewFileName = NewFileName + "output_" + System.DateTime.Now.Year.ToString() + "-" + System.DateTime.Now.Month.ToString();   Message_2(FILE.ReceivedFileName) = NewFileName; Here we make a copy of the received message, get its original file name (ReceivedFileName), replace its extension with an underscore, and date-stamp it. Finally, add a Send shape and a Port to the surface, and configure them to send the message you just created. You should wind up with an orchestration like this: Deploy it, and create a new send port. It should be just about identical to the first send port, except this time the file name will be "%SourceFileName%.xml" (without the quotes of course). Fire up the application, drop in a test file, and you should now get both the xml file named with a GUID, and a second file named something along the lines of "MySchemaTestFile_output_2011-6.xml".
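
    For readability, here is the same Message Assignment expression from the post laid out one statement per line (Message_1 is the received message, Message_2 is the copy being constructed):

        Message_2 = Message_1;

        NewFileName = Message_1(FILE.ReceivedFileName);
        NewFileName = NewFileName.Replace(".xml", "_");
        NewFileName = NewFileName + "output_"
                    + System.DateTime.Now.Year.ToString() + "-"
                    + System.DateTime.Now.Month.ToString();

        Message_2(FILE.ReceivedFileName) = NewFileName;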

    Read the article

  • Collision Detection Code Structure with Sloped Tiles

    - by ProgrammerGuy123
    I'm making a 2D tile based game with slopes, and I need help on the collision detection. This question is not about determining the vertical position of the player given the horizontal position when on a slope, but rather about the structure of the code. Here is my pseudocode for the collision detection: void Player::handleTileCollisions() { int left = //find tile that's left of player int right = //find tile that's right of player int top = //find tile that's above player int bottom = //find tile that's below player for(int x = left; x <= right; x++) { for(int y = top; y <= bottom; y++) { switch(getTileType(x, y)) { case 1: //solid tile { //resolve collisions break; } case 2: //sloped tile { //resolve collisions break; } default: //air tile or whatever else break; } } } } When the player is on a sloped tile, he is actually inside the tile itself horizontally, so that the player doesn't look like he is floating. This creates a problem because when there is a sloped tile next to a solid square tile, the player can't move past it, because this algorithm resolves any collisions with the solid tile. Here is a gif showing this problem: So what is a good way to structure my code so that when the player is inside a sloped tile, solid tiles get ignored?
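
    One possible restructuring (a sketch only: tileRange, getTileType and the two resolve functions are assumed helpers on Player, and the exact rule for which solid tiles to skip will vary by game) is to resolve the slope first, then ignore solid tiles that adjoin the occupied slope:

        void Player::handleTileCollisions() {
            int left, right, top, bottom;
            tileRange(left, right, top, bottom);          // tile columns/rows the player overlaps

            // Pass 1: find and resolve a sloped tile the player currently occupies.
            int slopeX = -1, slopeY = -1;
            for (int x = left; x <= right && slopeX < 0; x++)
                for (int y = top; y <= bottom && slopeX < 0; y++)
                    if (getTileType(x, y) == 2) { slopeX = x; slopeY = y; }
            if (slopeX >= 0)
                resolveSlopeCollision(slopeX, slopeY);    // snap the player to the slope surface

            // Pass 2: resolve solid tiles, skipping those beside or below the occupied slope,
            // since they are exactly the tiles that snag the player while inside the slope.
            for (int x = left; x <= right; x++) {
                for (int y = top; y <= bottom; y++) {
                    if (getTileType(x, y) != 1) continue; // only solid tiles here
                    int dx = x - slopeX; if (dx < 0) dx = -dx;
                    if (slopeX >= 0 && dx <= 1 && y >= slopeY) continue;
                    resolveSolidCollision(x, y);
                }
            }
        }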

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS?  Every Microsoft-oriented infrastructure in today's enterprises depends largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory makes life for IT personnel and users alike a lot easier. Centralised management and the abilities opened up by having it in place are ample.  But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist) but in general we can say it contains everything your company has in place in one form or another.  The domain name services. The Domain Name Service (or DNS for short) is a service which translates readable, easy to understand names into IP addresses (the identifiers for each computer in your domain), and back. This service is a prerequisite for ADDS to work, and a wrong record in a DNS server will make any ADDS service fail. Generally speaking a DNS service will be run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box… Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause headaches down the line. It is for that reason that I like to start building from the bottom up: start with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook in to the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure… Whenever starting with an ADDS deployment I ask the client the following questions:  What are your long-term plans and goals?  How flexible do you want it? Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design? 
Those three questions should give some sort of indication of what direction can be taken, and whether the client has thought about some things themselves :).  The technical side of things  What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services.  Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:  Network (WAN/LAN links and physical sites) DNS Namespacing All in one domain or split up in different domains/forests? Security (both for ADDS and physical sites) The network side of things  Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.… Depending on this information we will design our site links (which control replication) in future stages. DNS Namespacing Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com) but it is highly recommended never to do this. If you do end up with a domain like that for some reason, there are a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is always to use a two-part name (contoso.com or contoso.lan for example). Internal namespace A single namespace is just what it sounds like. You have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you have more obscurity from the internet on the DNS side of things, but it will require additional work to publish services to the web. External namespace Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. Or in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things. Split DNS Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them. All in one or not? In determining your Active Directory design you can look at the following possibilities:  Single forest, single domain Single forest, multiple domains Multiple forests, multiple domains I've listed the possibilities for design in increasing order of administrative magnitude. Microsoft recommends using a single forest, single domain in as many situations as possible. 
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements to have multiple forests. Security What kind of security is required on the domain, and does this reflect the physical security on the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality; in essence it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected. Th- Th- Th- That’s all folks! Well, at least for now! In future editions of this series we’ll be walking through the different tasks that need to be done and the thought that needs to be put into them. But for all editions we’ll be going with the concept of running a single forest, single domain with a split DNS setup… See you next time!

    Read the article

  • Conflict between variable substitution and CJK characters in BASH

    - by AndreasT
    I encountered a problem with variable substitution in the BASH shell. Say you define a variable a. Then the command $> echo ${a//[0-4]/} prints its value with all digits in the range 0 to 4 removed: $> a="Hello1265-3World" $> echo ${a//[0-4]/} Hello65-World This seems to work just fine, but let's take a look at the next example: $> b="?1265-3?" $> echo ${b//[0-4]/} ?1265-3? Substitution did not take place: I assume that is because b contains CJK characters. This issue extends to all cases in which square brackets are involved. Surprisingly enough, variable substitution without square brackets works fine in both cases: $> a="Hello1265-3World" $> echo ${a//2/} Hello165-3World $> b="?1265-3?" $> echo ${b//2/} ?165-3? Is it a bug or am I missing something? I use Lubuntu 12.04, the terminal is lxterminal, and echo $BASH_VERSION returns 4.2.24(1)-release. EDIT: Andrew Johnson stated in his comment that with gnome-terminal and the 4.2.37(1)-release the command works fine. I wonder whether the problem lies with lxterminal or with that specific 4.2.24(1)-release version.
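
    Not from the thread, but a quick experiment that narrows the problem down: compare the bracket range against the POSIX digit class and against byte-wise matching in the C locale. If the latter two behave while [0-4] does not, the issue is in how that bash build applies range patterns to multibyte strings. The CJK string below is only an illustration, since the original characters were lost from the post:

        #!/bin/bash
        b="漢字1265-3文字"                   # any multibyte string containing digits

        echo "${b//[0-4]/}"                 # range pattern - fails for the poster
        echo "${b//[[:digit:]]/}"           # POSIX character class - removes 0-9
        ( LC_ALL=C; echo "${b//[0-4]/}" )   # byte-wise matching in the C locale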

    Read the article

  • multi-dimension array problem in RGSS (RPG Maker XP)

    - by AzDesign
    This is my first day writing a script in RMXP. I read tutorials, Ruby references, etc. and found myself stuck on a weird problem. Here is the scenario: I made a custom script to display layered images. Create the class, create an instance variable to hold the array, create a simple method to add an element into it, done. The draw method (skipping the rest of the code up to this part): def draw image = [] index = 0 for i in [email protected] if image.size > 0 index = image.size end image[index] = Sprite.new image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png') image[index].x = @x + @components[i][1] image[index].y = @y + @components[i][2] image[index].z = @z + @components[i][3] @test =+ 1 end end Create an event that runs this script: $layerz = Layerz.new $layerz.configuration[0] = ['root',0,0,1] $layerz.configuration[1] = ['bark',0,10,2] $layerz.configuration[2] = ['branch',0,30,3] $layerz.configuration[3] = ['leaves',0,60,4] $layerz.draw Run, trigger the event, and the result: ERROR! Undefined method `[]' for nil:NilClass pointing at this line in the draw method: image[index].bitmap = RPG::Cache.picture(@components[i][0] + '.png') THEN, I changed the method like this just for testing: def draw image = [] index = 0 for i in [email protected] if image.size > 0 index = image.size end image[index] = Sprite.new image[index].bitmap = RPG::Cache.picture(@components[0][0] + '.png') image[index].x = @x + @components[0][1] image[index].y = @y + @components[0][2] image[index].z = @z + @components[0][3] @test =+ 1 end I changed the @components[i][0] to @components[0][0] and IT WORKS, but it only draws the root, as it no longer iterates to the next array index. I'm stuck here, see: in a single-level array, @components[0] and @components[i] have no problem; in a multi-dimension array, @components[0][0] has no problem, BUT in a multi-dimension array, @components[i][0] produces the error mentioned above. Any suggestion to fix the error? Or did I write something wrong?
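
    A hedged guess, not a confirmed fix: the loop header above has been mangled by the forum's e-mail obfuscation (it was presumably a range over @components, e.g. 0...@components.size), and if that range runs past the populated entries, or if @components has a gap, then @components[i] is nil and indexing it raises exactly this error. A defensive rewrite of the loop (a sketch only; Sprite and RPG::Cache are RMXP built-ins, the data layout is the poster's) might look like:

        def draw
          @images ||= []                              # keep sprites alive between calls
          @components.each_with_index do |component, i|
            next if component.nil?                    # skip gaps instead of crashing
            name, dx, dy, dz = component              # e.g. ['root', 0, 0, 1]
            sprite = Sprite.new
            sprite.bitmap = RPG::Cache.picture(name + '.png')
            sprite.x = @x + dx
            sprite.y = @y + dy
            sprite.z = @z + dz
            @images[i] = sprite
          end
        end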

    Read the article

  • Network / Internet diagnostic tool to locate an error?

    - by Jesper
    Hi, I’m facing some difficulties with the Internet and network at my work, and I have trouble locating the precise error and when and how it occurs. The problem is that the client machines in the building sporadically lose their connection to the Internet. I’m somewhat new here, so I don’t have much insight into the network, and apparently neither did my predecessor. What I am requesting and hoping you know about is whether there exists some kind of network monitoring tool I can install and run that will periodically check the network, the Internet connection, etc. and record the results to logs. Then, if a problem suddenly arises at some time of day in some part of the network or the Internet connection, I can check it, perhaps the next day. I’ve just downloaded and installed the Microsoft Network Monitor 3.3 application and hopefully it can give me some answers about where the instability is located, but I would still like a tool that runs different checks and tests at some interval. Does anyone know of such a program, or another kind of performance/diagnostic tool or method I can use? Sincerely, Jesper

    Read the article

  • How to fix Failed to initialize Windows Azure storage emulator error

    - by ybbest
    When you press F5 to start debugging an Azure project, you might get the following exception: If you go to the Output window, you will see the detailed error message below: Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Development Storage: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Development Storage using the ‘DSInit’ utility in the Windows Azure SDK. This is because, by default, Azure uses SQLExpress to start Development Storage. To fix this you can do the following: Open a command prompt and navigate to C:\Program Files\Windows Azure SDK\v1.4\bin\devstore (depending on your Azure SDK version, the file path will be slightly different). Next, run DSInit /sqlInstance:. (the "." means SQL Server uses the default instance; if you have a named instance, replace the "." with the name of the instance). After a short while, you should see the following window showing that the configuration succeeded. You can download a batch file here. References: http://msdn.microsoft.com/en-us/library/gg433132.aspx
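
    For example (the path is the one from the post; the first command targets the default local instance, and SQL2008 is a hypothetical named instance name):

        cd "C:\Program Files\Windows Azure SDK\v1.4\bin\devstore"
        DSInit /sqlInstance:.
        DSInit /sqlInstance:SQL2008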

    Read the article

  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity.  We ran into an SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported the SSL certificate that we use for external access on all of the new servers.  After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server.  After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, there were no certificates listed, as shown below.   The first thing we did was use the Certificates MMC snap-in to review the SSL certificate information.  We noticed in the General tab that Windows does not have enough information to verify this certificate, and in the Certification Path tab that the issuer of the certificate we had imported successfully earlier could not be found.     While troubleshooting, we learned that we could not access the URL for the certificate’s CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.
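
    For anyone hitting the same symptom: a quick way to check whether CRL retrieval is the blocker is certutil's URL-fetching verification, which tries to download the CRL and AIA URLs embedded in the certificate (cert.cer below stands for an exported copy of the certificate):

        certutil -urlfetch -verify cert.cer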

    Read the article

  • SQL Performance Problem IA64

    - by Vendoran
    We’ve got a performance problem in production. QA and DEV environments are 2 instances on the same physical server: Windows 2003 Enterprise SP2, 32 GB RAM, 1 quad-core 3.5 GHz Intel Xeon X5270 (4 cores x64), SQL 2005 SP3 (9.0.4262), SAN drives. Prod: Windows 2003 Datacenter SP2, 64 GB RAM, 4 dual-core 1.6 GHz Intel Family 80000002, Model 6 Itanium (8 cores IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster. I am seeing excessive Signal Wait Percentages ( 250%), and Page Reads/s (50) and Page Writes/s (25) are both occasionally high. I did test this query on both QA and PROD and it has the same execution plan and even the same stats: SELECT top 40000000 * INTO dbo.tmp_tbl FROM dbo.tbl GO Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. As you can see it’s just logical reads; however: QA: 0:48 Prod: 2:18 So it seems like a processor-related issue, but I’m not sure where to go next. Any ideas? Thanks, Aaron
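
    One way to put a number on CPU pressure on both boxes (a standard DMV query, not taken from the post) is to compare signal wait time, the time runnable tasks spend waiting for a CPU, against total wait time:

        SELECT
            SUM(signal_wait_time_ms)                        AS signal_wait_ms,
            SUM(wait_time_ms - signal_wait_time_ms)         AS resource_wait_ms,
            CAST(100.0 * SUM(signal_wait_time_ms)
                 / SUM(wait_time_ms) AS DECIMAL(5, 2))      AS signal_wait_pct
        FROM sys.dm_os_wait_stats;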

    Read the article

  • How to PERMANENTLY disable touchpad tap-to-click on Dell Inspiron/Windows 7

    - by Graham
    Hi all - my first time here, so I hope you can help. I've seen a lot of stuff on various forums (including here) about disabling the annoying "tap" function on a laptop touchpad. I learned the hard way not to uninstall the driver (as the software suggests), since you then lose the Synaptics tab in the mouse control settings, and with it all means to modify the touchpad settings ... incidentally, if this happens to you, reboot in safe mode and do a restore, and the Synaptics tab comes back. Not ideal, I know, but it works. Anyway, I have the most up-to-date drivers, and I can go to the Synaptics tab and disable the tap-to-click function, no problem. However, the next time the machine is booted, tap-to-click is back on. It can always be disabled again, but it's a pain having to reset it every time the machine is powered up. Is there a way to permanently disable it, once and for all? Thanks in advance, Graham

    Read the article

  • Workflow: Operate Zones

    - by Owen Allen
    The Operate Zones workflow is another of the workflow documents that we introduced recently. It follows naturally after the Deploy Oracle Solaris 11 Zones workflow that I talked about last week, so I thought I'd talk about it next. This workflow is less linear than the zone deployment workflow. It's built around this image: The left side shows you the prerequisites for zone operation: you have to deploy libraries and deploy either Oracle Solaris 10 or 11 zones - whichever type you want to manage using this workflow. Once you have the zones deployed, you can begin to operate them. If you want to associate resources with the global zone, the workflow directs you to the Exploring Your Server Pools how-to, which talks about adding global zones to server pools and associating libraries and network resources with them. Otherwise, it directs you to a set of how-tos about zone management: Managing the Configuration of a Zone, which explains how to add storage, edit zone attributes, and connect zones to networks; Lifecycle Management of Zones, which explains how to halt, shut down, boot, reboot, or delete a zone; and Migrating Zones, which explains how to move a zone to a new global zone in the same server pool. Finally, it directs you to the Update Oracle Solaris workflow when you want to update your zones, and to the Monitor and Manage Incidents workflow to learn more about monitoring your assets.

    Read the article

  • Windows7 NFS with linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I have done: installed nfs-kernel-server, nfs-common and portmap (as in the FAQ). Set up /etc/exports: /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check) Then: sudo exportfs -ra Then: sudo /etc/init.d/nfs-kernel-server restart I checked whether it all works on the same machine: sudo mount 192.168.1.101:/var/www /mnt/test Then I accessed /mnt/test and saw that all the data was present and everything was OK. Next, I tried to connect this folder to Windows 7 using the NFS client. First, I checked that Linux exported the path successfully: showmount -e 192.168.1.101 /var/www 192.168.1.0/255.255.255.0 All OK, so on to the mount: mount -o anon 192.168.1.101:/var/www z: The console said that everything succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says that the timeout expired. Help me please.
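
    One commonly suggested cause of a mount that succeeds but then hangs or times out from Windows (not confirmed for this case) is that the Windows NFS client connects from a non-privileged source port, which the Linux server rejects unless the export allows it. Adding insecure to the export options and re-exporting is a cheap thing to try; checking that the firewall passes the portmapper, mountd, statd and lockd ports is another.

        # /etc/exports - sketch; options otherwise as in the post
        /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check,insecure)

        # then re-export:
        sudo exportfs -ra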

    Read the article

  • Server vendor that allows 3rd party disks

    - by Alvin S
    As noted here, Dell is no longer allowing 3rd party disks to be used with their latest servers. As in, they don't work, period. Which means that if you buy one of these boxes and want to upgrade the storage later, you have to buy disks from Dell at significant premiums. Dell has just given me a very strong reason to take my server business elsewhere. My company buys (instead of leasing) our servers, and typically uses them for 5 years. I need to be able to upgrade/repurpose storage periodically, and do not want to be locked in to whatever Dell might have in stock, at inflated prices to boot. As you will see in the comments of the above link, it seems HP is doing the same thing. I am looking for a server vendor that offers a 3-5 year warranty with same day/next day onsite service, and allows me to use 3rd party disks. Suggestions?

    Read the article

  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
    I have a laptop, a mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On the Ubuntu releases 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers: the Fn+F4/F5 buttons did not change the brightness. I tried to edit xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried adding some options to grub: acpi_osi="Linux" acpi_backlight=vendor Neither worked for me. Today I installed Ubuntu 12.04 beta 2 and... with the nouveau driver my Fn keys work and change the brightness (whether it is the new 3.2.0-22 kernel or a patched nouveau driver, I don't know). This is a big step forward. But when I install the proprietary nvidia driver (295.33), the Fn keys stop working and I can't change the brightness. I also tried the xorg and grub workarounds again with no result. Tried to install acpi from apt - no result. Is there anything left to try? I really need the nvidia driver working with the Fn keys, as I would like to have working 3D acceleration. P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If there is a need to provide some log data, please write what I should post, as I'm a bit new to Ubuntu. P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others). P.P.P.S. Other Fn buttons work with both drivers (Mute, Vol up/down, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song). Some new thoughts... CONFIG_BACKLIGHT_GENERIC=m - could this be an issue? I got this by running grep BACKLIGHT /boot/config-3.2.0-22-generic-pae Full grep output can be viewed here: http://pastebin.com/sMRd2Z4k
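
    For reference, the RegistryDwords workaround mentioned above belongs in the nvidia Device section of xorg.conf; a sketch of the placement only (whether it actually helps on this panel is exactly what is in question):

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            Option      "RegistryDwords" "EnableBrightnessControl=1"
        EndSection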

    Read the article

  • Webcast - Oracle Database In-Memory Option

    - by Thanos Terentes Printzios
    Following the recent announcement by Larry Ellison on the future of the database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can learn more about the brand new Oracle Database 12c In-Memory option. Oracle Database In-Memory is Oracle’s new memory-optimized technology that transparently accelerates analytic, data warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads. Participants will learn about Oracle Database In-Memory benefits, features, and leading-edge architecture.  The Database In-Memory architecture provides the ability to easily process data orders of magnitude faster by simply enabling the feature and identifying tables to bring in-memory, without application changes. Details on Oracle Database In-Memory’s ease of use and management, scalability, and availability will also be covered. Please join us to learn more about Oracle Database In-Memory and get first-hand knowledge of this important new feature. Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. These Oracle webcasts are FREE for Customers, System Integrators, ISVs, VARs and Platform Partners. Presenter: Richard Jacobs, Oracle Solution Architect  Europe Webcast 1 Date: August 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST) Register Here! Europe Webcast 2 Date: September 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST) Register Here!

    Read the article

  • How well will ntpd work when the latency is highly variable?

    - by JP Anderson
    I have an application where we are using some non-standard networking equipment (cannot be changed) that goes into a dormant state between traffic bursts. The network latency is very high for the first packet since it's essentially waking the system, waiting for it to reconnect, and then making the first round-trip. Subsequent messages (provided they are within the next minute or so) are much faster, but still highly-latent. A typical set of pings will look like 2500ms, 900ms, 880ms, 885ms, 900ms, 890ms, etc. Given that NTP uses several round trips before computing the offset, how well can I expect ntpd to work over this kind of link? Will the initially slow first round trip be ignored based on the much different (and faster) following messages to/from the ntp server? Thanks and Regards.
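
    Not an answer from a responder, but two ntpd behaviours are relevant here: the clock filter already keeps the last eight samples per server and prefers the lowest-delay one, so the slow wake-up packet tends to be discarded, and the huff-n-puff filter is designed specifically for paths whose delay only occasionally drops to its true minimum. A sketch of what enabling it might look like in ntp.conf (the server name and the window size are placeholders):

        # /etc/ntp.conf
        server ntp.example.org iburst minpoll 6 maxpoll 10

        # keep a 4-hour window of delay samples and correct for the inflated ones
        tinker huffpuff 14400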

    Read the article

  • IPv6 static routes

    - by user98651
    I am looking to configure a few hosts with IPv6 on my network. The router (running CentOS 5) is configured with a Hurricane Electric (HE) tunnel, which works fine on that host. However, I would like to statically give a few additional hosts on the same LAN IPv6 connectivity through this tunnel. No, I don't want radvd or dhcpv6 to do the work for me in this case. I already have IPv6 forwarding enabled in sysctl.conf. I am looking for help with the next steps (statically adding the addresses and routes). Let's say the IP addresses are as follows: Router: 2001:470:1b07:1:: Host1: 2001:470:1b07:2:: How would I go about making them see each other? Thanks in advance for the help.
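
    A generic sketch of the static configuration (interface names and the /64 are illustrative; in practice the router and the hosts normally share one routed /64, which is not quite what the two addresses above suggest):

        # On the CentOS 5 router, LAN interface eth1 assumed:
        ip -6 addr add 2001:470:1b07:1::1/64 dev eth1

        # On Host1:
        ip -6 addr add 2001:470:1b07:1::2/64 dev eth0
        ip -6 route add default via 2001:470:1b07:1::1 dev eth0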

    Read the article

  • How to disable Microsoft eHome MCIR Keyboard and company?

    - by AndrejaKo
    Hi! I'm an unlucky owner of an Acer 7720G laptop which, like many in its category, has a receiver for a proprietary infrared remote control (which I did not receive with my laptop!). The receiver is detected as Microsoft eHome MCIR Keyboard, Microsoft eHome MCIR 109 Keyboard and Microsoft eHome Remote Control Keyboard keys. My problem is that this driver has incompatibilities with some programs I use, for example DosBox. When these devices are installed, they cause DosBox to incorrectly detect some keyboard buttons. The workaround is to remove or disable the 3 hardware devices. Unfortunately, the disable option is grayed out, and when I delete them, they are reinstalled on the next restart. Is there any way to hack Windows in order to prevent their installation? I was thinking about locating the drivers these devices use, but they are buried somewhere in the Windows installation and I don't have enough experience to find them, so I'm asking you for help.

    Read the article

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks

    Read the article

  • Practical Approaches to increasing Virtualization Density-Part 1

    - by Girish Venkat
    Happy New Year, everyone! Let me kick the year off by talking about virtualization density.  What is it? The number of virtual servers that a physical server can support, and its increase over the prior physical infrastructure as a percentage. Why is it important? Because the density should be indicative of how well the server is actually being consumed. So what is wrong? Virtualization density fails to convey the "real usage" of a server.  Most hypervisor-based O/S virtualization evangelists take pride in the fact that they are now running a virtual server farm of X machines compared to a physical server farm of Y (with Y less than X, obviously). The real question is: has your utilization of the servers really increased or not? In an internal study conducted by one of the top financial institutions, the utilization of servers only went up by 15 percentage points, from 30 to 45. So this really means that just by increasing virtualization density one will not be achieving the goal of using the servers in the server farm better.  I will write about the possible approaches to increasing virtualization density in the next entry. 

    Read the article

  • VSS Not Creating Shadow Set

    - by Jeff Leyser
    I'm trying to set up backup scripts on WinXP to use Volume Shadow Sets. I downloaded the VSS 7.2 SDK from MSFT, and used the included vshadow.exe to create a shadow set: vshadow -script=vss-setvar.cmd f: (note that I've tried both f: and c:) vshadow executes just fine, giving no errors, and reports that the shadow is created. However, executing vshadow -q as the very next command results in "There are no shadows on the system" and, indeed, if I use dosdev to try and map the shadow set named in vss-setvar.cmd, it will not work. Am I missing a step?
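
    A plausible explanation, though the post does not confirm it: client editions such as Windows XP only support non-persistent shadow copies, so a shadow created by vshadow is released the moment vshadow exits, before the next command ever runs. The usual pattern is to have vshadow run the backup script itself while it still holds the shadow, along these lines (do-backup.cmd is a hypothetical script that calls vss-setvar.cmd to pick up the shadow device name and copies files from it):

        vshadow.exe -script=vss-setvar.cmd -exec=do-backup.cmd f: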

    Read the article

  • Project Manager that wants to lock in time estimate with a signed contract

    - by sunpech
    At a previous employment, a project manager (PM) wasn't satisfied with the delivery time of the code on a project I was on. I was told by my project lead that the PM was considering having me sign a contract to lock in the time estimates I gave for tasks and the delivery dates. The situation on the project was that we were working with new technologies, a new codebase, new coding standards, and very prone-to-change requirements. I was learning new things and applying them as best I could to requirements that kept on changing. The requirements grew by 2-3 times over the iterations, with my estimate-to-complete growing by roughly 5-8 times. The only things that didn't change were the estimates and delivery dates. Yes, I did end up missing most deadlines. And I was working on some very new technologies that no one else on the entire development team could really help out on, because they weren't familiar with them. At least not easily. It seemed to me then that the PM wanted his numbers to add up, and thus wanted me to sign a contract to "ensure" that I would always deliver working code on time. I suppose with a signed contract the PM could use it against me if I couldn't deliver on time. I believe what happened next was that other project managers and/or project leads defended me and didn't let this happen. My question is: should this raise a red flag about the manager? Is it common practice for a manager to lock in the time estimates of a software developer with a signed contract? Or, in this case, to try to? Please note, I was a full-time employee, not an independent consultant. Update: I want to add that I did give new estimates weekly, but it seems the original estimates and delivery dates were what the PM was fixated on.

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data in to a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • /var/log/secure user activity. also, httpd can not start without two users

    - by user52869
    Hello, I found some strange information in the /var/log/secure file: Feb 10 02:02:04 server2364 usermod[30750]: unlock user `username1' password Feb 10 02:02:04 server2364 usermod[30811]: lock user `username2' password Feb 10 02:05:16 server2364 usermod[30992]: unlock user `username2' password Feb 10 02:05:18 server2364 usermod[31114]: unlock user `username1' password username1 and username2 are two accounts on the system that have no ability to log in. Entries like these appear in /var/log/secure every night at 02:02. One more thing: the files /etc/shadow, and /etc/shadow have timestamps of 02:05. What can be the cause of this? Next thing: if I remove those two accounts (username1 and username2), I cannot start the web server. Can you help me with some ideas? Am I hacked?

    Read the article
