Search Results

Search found 19586 results on 784 pages for 'machine instruction'.


  • DNS updating issue

    - by Will
    Hey guys, I'm new to Server Fault so please excuse me if I sound a tad noob. I work in an environment that is kind of piecemealed together, and I honestly don't know how it all works. I'm new to the IT field and am still in school. When I replace a PC I rename the old one to mo-o-pcname and give the new one the proper name of mo-pcname (mo is a location prefix we use, so it doesn't really apply to the problem). The new PC functions on the network: it can access network resources (printers, file shares, etc.) and it can get out to the internet. However, I can no longer ping the machine by name. It would appear the DNS (A) record is not getting updated. Like I said, I'm kind of new to the field and just trying to work through this problem. Thanks for your help.
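
    A first check, assuming the clients register with an Active Directory DNS server: force the renamed machine to re-register its record, then verify what the server hands back. These are standard Windows commands; the hostname is the one from the question:

        ipconfig /flushdns
        ipconfig /registerdns
        nslookup mo-pcname

    If nslookup still returns the old address, the stale A record can be deleted by hand on the DNS server (or aged out by enabling scavenging on the zone).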

    Read the article

  • Meetings Disappearing from Outlook 2010 Public Calendar

    - by Neil
    We are experiencing a frustrating issue with our public calendars in Outlook 2010. Meetings that were scheduled months in advance go missing, then reappear. If user A logs in at 9:30 and goes to the calendar, certain meetings are missing. Fifteen minutes later, when user B logs in, the meetings are there. It is not tied to a particular user; I have seen the issue occur with the order of logging in reversed. These are meetings that were posted to the calendar months ago, so it should not be an issue of an item being updated. We have not upgraded our Exchange environment (still running on 2003), but this is a new machine running Windows 7 Professional on a domain with Office 2010. Are there any quirks or settings that I am missing or not aware of?

    Read the article

  • Apache w/out internet connection

    - by robert knobulous
    I have a Vista laptop that I have been running Apache / MySQL / PHP / phpMyAdmin on for quite some time without fail. I just use it to test bits of code here and there. No problems, until recently when I needed to test something while in a place where I could not get an internet connection. Why am I unable to access localhost from the same machine without an internet connection? I type http://localhost into the browser's address bar and get a message that I am unable to access it without an internet connection. I checked my Windows\System32\drivers\etc\hosts file and the first two lines are:

        127.0.0.1 localhost
        ::1 localhost

    What am I missing here?
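
    A quick way to narrow this down, assuming the error message comes from the browser rather than from Apache: rule out the browser's offline/proxy handling by testing the loopback interface directly. Both are stock Windows commands:

        ping 127.0.0.1
        netstat -an | find ":80"

    If port 80 is listening, the TCP stack and Apache are fine, and the usual culprit is the browser's "Work Offline" mode or a proxy configured to apply to all addresses, including localhost.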

    Read the article

  • Exalogic - The One Day Installation Challenge

    - by james.bayer
    It's a really exciting time for the extended WebLogic community as we enjoy seeing the impressive results of Exalogic deployments. At Oracle OpenWorld, a lot of people I spoke with came away impressed with the raw performance. However, Exalogic offers a lot more than just raw performance. I had the pleasure of working with Ram Sivaram during one of the Exalogic training sessions in Santa Clara. In this video diary, he shows the Exalogic machine arriving on the shipping dock, getting unpacked, wired up, powered on, configured, and installed with a WebLogic Server cluster in just about 10 hours. I've worked with customers in the past that have taken several weeks or longer to get an environment ready after the hardware arrives; this typically involves many different specialized teams in their organization. Mohamad Afshar just wrote a great explanation of the benefits of Engineered Systems, contrasting them with the status quo. Being able to streamline deployment of middleware capacity will have a lot of value for customers, shortening time to deployment. Thanks for the video, Ram. You've set a high bar; we'll see if anyone can top your time!

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply). The Windows Firewall service is disabled as well. Connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents, and download small files. When a client attempts to download a large file (e.g., one whose download runs for 3 minutes, which is still under 20 minutes but relatively long), the server sends all the data, then the connection times out. The client issues REST / RETR commands attempting to restart the download from after the last byte (which I believe should succeed and receive exactly 0 bytes), but the server behaves as if the client tried to restart after byte 0; that is, it sends the entire file all over again. Any ideas on how to fix this?
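
    For reference, a healthy restart exchange would look roughly like the sketch below (the file name and offset are made up); the bug described is the server answering the RETR as though the REST offset were 0:

        > REST 52428800
        < 350 Restarting at 52428800.
        > RETR bigfile.zip
        < 150 Opening BINARY mode data connection.   (should transfer the 0 remaining bytes, not the whole file)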

    Read the article

  • "Disk boot failure" error after installing Windows 7 on SSD

    - by Tony_Henrich
    I have a system with 3 SATA drives that runs fine. I got a new SSD and wanted to install a fresh Windows 7 on it, so I removed the boot drive and replaced it with the SSD. I installed Windows, and when it was done, rebooted; now I get the error message "Disk boot failure. Insert system disk and press enter." I reinstalled and still got the same message. I removed the SSD and put back the original drive, and I got the same message! I checked the BIOS and things look good. Something is wrong. Two questions: 1. Why isn't the new Windows booting from the SSD? 2. Why isn't the machine booting from the previous working configuration anymore after removing the SSD? The original drive was connected during the second Windows installation, but as the last drive on the SATA connectors. Would the Windows installer have messed with its MBR sector?
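
    One way to test the MBR theory, assuming a Windows 7 install DVD or USB is at hand: boot into the recovery environment's command prompt with only the intended boot drive attached and rewrite the boot structures. These are the standard recovery-console commands:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    It is also worth confirming in the BIOS which of the four drives is first in the hard-disk boot order; with multiple drives attached, the installer may have placed the boot files on a different disk than Windows itself.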

    Read the article

  • Win 2008 R2 Server Not Recognizing Second Hard Drive

    - by Brian
    Hello, I just purchased a Dell server, which has two hard drives and no RAID setup. I can currently see only one hard drive, and I'm not sure how to get the system to recognize the other; I assumed that on a new machine this wouldn't be an issue. I loaded Windows Server 2008 R2 onto it. I'm a n00b to all of this, so I'm not sure why it's failing to work... Any help appreciated. Thanks.
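
    A common cause on a new server is that the second disk simply hasn't been brought online and initialized. A minimal check from an elevated command prompt (disk number 1 is an assumption; confirm it in the list):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> online disk

    After that, Disk Management (diskmgmt.msc) can initialize, partition, and format the drive so it gets a letter.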

    Read the article

  • Replicate Oracle to MySQL

    - by Rosdi
    I am developing a web app that will use MySQL. I need to replicate my client's Oracle database into MySQL; only a few tables will be involved, but a table can be up to 2-3 million rows. I only have SELECT privilege on the Oracle side, so don't ask me to install any kind of service on the Oracle machine. I have complete control on the MySQL side, however. The replication is one way only (Oracle to MySQL). I could write a simple script to truncate the MySQL tables and repopulate them every night, but I think this is very inefficient; there must be a better way. Are there any free tools I can use? An expensive database replication system is definitely out of the question.
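
    With SELECT-only access, one step up from truncate-and-reload is pulling only rows changed since the last run. This assumes the tables have a last-modified timestamp column and a primary key; the table, column, and credential names below are all hypothetical, and SET MARKUP CSV requires a recent SQL*Plus (older versions would need SET COLSEP instead). A rough sketch using only the command-line clients:

        # export rows changed in the last day from Oracle
        sqlplus -s readonly_user/secret@orcl <<'EOF' > subscribers.csv
        SET MARKUP CSV ON
        SET FEEDBACK OFF
        SELECT * FROM subscribers WHERE updated_at > SYSDATE - 1;
        EOF

        # upsert into MySQL instead of truncating (REPLACE keys on the primary key)
        mysqlimport --replace --local --ignore-lines=1 \
          --fields-terminated-by=, --fields-optionally-enclosed-by='"' \
          mydb subscribers.csv

    Deletions on the Oracle side would not propagate this way; those would still need an occasional full reload or a soft-delete flag.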

    Read the article

  • VirtualBox - split partitioned VDI into separate VDIs

    - by mathematical.coffee
    I'm very new to VirtualBox. I set up an Arch Linux VM and an Ubuntu VM (Ubuntu host), both sharing the same .vdi like so (I had a dual-boot situation in mind):

        VDI file (25GB)
        |- /dev/sda1: Arch Linux, 5GB
        |- /dev/sda2: [Ubuntu]
           |- /dev/sda5: swap, 1GB
           |- /dev/sda6: Ubuntu /, 9GB
           |- /dev/sda7: Ubuntu /home, 10GB

    I've now realised that I don't want a dual-boot-type setup; I'd rather boot each machine independently (my initial thought was to share /home between Ubuntu and Arch). So, my question: can I split /dev/sda1 and /dev/sda2 each into their own .vdi file so I can use them as completely separate machines? I'd rather not re-install either Arch (because it took me ages to work it out!) or Ubuntu (because I've already done a few GB of updates and don't want to redo them). I haven't been able to find anything about this - most questions I see are about converting a .vdi to a partition on the host, splitting a .vdi into multiple smaller files (that are not independent), or converting a partition on the host to a .vdi file. cheers.
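
    One approach that avoids reinstalling, sketched under the assumption that you're comfortable repartitioning and reinstalling a bootloader from a live CD (file names and sizes below are illustrative): create two fresh VDIs, attach them plus the original to a live-CD VM, copy each partition across with dd, then install GRUB on each new disk so it boots standalone.

        # on the host: create the new empty disks (sizes in MB, slightly larger than the source)
        VBoxManage createhd --filename arch.vdi --size 6144
        VBoxManage createhd --filename ubuntu.vdi --size 21504

        # inside a live-CD VM with all three disks attached:
        # partition /dev/sdb and /dev/sdc to match, then copy each partition, e.g.
        dd if=/dev/sda1 of=/dev/sdb1 bs=4M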

    Read the article

  • Oracle Linux Pavilion is Back for Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By Zeynep Koch. Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money. Partners exhibiting their solutions in the Oracle Linux Pavilion include:

      • BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
      • Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
      • Data Intensity: cloud services and application management
      • Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
      • HP: converged cloud, converged infrastructure, application transformation, and information optimization
      • LSI: intelligent solid-state storage solutions for breakthrough database acceleration
      • Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
      • Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
      • NetApp: storage and data management
      • QLogic: high performance networking
      • Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD kit when you visit one of these partners. We look forward to seeing you at the pavilion.

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js on a Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, then used Git for Windows to commit my changes and sync them to the WAWS git repository, which triggers a new deployment to host my Node.js application. Someone may notice that an NPM package can contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of these files must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages when deploying.

    Let's Start with a Demo

    A demo is most straightforward. Let's create a new WAWS and clone it to my local disk, then drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone, and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then create a new Node.js file named "server.js" and paste in the code below:

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now, we will find that it has detected the changes we made, which include "server.js" and all files under the "node_modules" folder. Only our source code needs to be uploaded, but the huge package files would be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

    First, we need to add a special file named ".gitignore". This cannot easily be done from the file explorer, since the file name consists only of an extension, so we do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore" (if the command window asks for input, just press Enter):

        echo > .gitignore

    Now open this file, copy the content below into it, and save:

        node_modules

    If we switch to Git for Windows, we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push now, the "express" package will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into it, and save:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. If we open the WAWS in the developer portal, we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment shows how WAWS handled it; in particular, we can see that WAWS executed NPM. Opening the log shows the command WAWS executed to install the packages and the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I didn't need to upload the whole package to Azure.
    Open the website and we can see the result, which proves that "express" was installed successfully.

    What Happened Under the Hood

    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository: all files and folders listed in it are skipped by git push. In the example above I put "node_modules" into this file in my local repository, which means: do not track or upload anything under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. A ".gitignore" can list files and folders, and it can also list files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author, etc. in JSON format, and we can also list the packages this Node.js application depends on. In WAWS, name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages, and executes NPM to install them one by one. So in the demo above I put "express" into this file so that WAWS would install it for me automatically.

    I updated the dependencies section of "package.json" manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when installing a package we can pass the "--save" parameter to "npm install" so that NPM updates the dependencies section for us. For example, when I wanted to install the "azure" package I would execute the command below (note the "--save"):

        npm install azure --save

    Once it finishes, "package.json" is updated automatically. Each dependent package is listed there; the JSON key is the package name, while the value is the version range. Below is a brief list of the version range formats (for more information about "package.json" please refer here):

        Format       Description                                                     Example
        version      Must match the version exactly.                                 "azure": "0.6.7"
        >=version    Must be equal to or greater than the version.                   "azure": ">=0.6.0"
        1.2.x        Must start with the supplied digits; any digit may stand in     "azure": "0.6.x"
                     for the x.
        ~version     Must be at least as high as the range, and less than the next   "azure": "~0.6.7"
                     major revision above it.
        *            Matches any version.                                            "azure": "*"

    WAWS will install the proper version of each package based on what is defined here. That is the whole process of WAWS git deployment plus NPM installation.

    But Some Packages...

    As we know, when we specify dependencies in "package.json", WAWS downloads and installs them on the cloud, and for most packages this works very well. But some special packages may not work: if the package installation has special environment requirements, it can fail. For example, the SQL Server driver for Node.js needs "node-gyp", Python, and C++ 2010 installed on the target machine during NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail, since there is no "node-gyp", Python, or C++ 2010 in the WAWS virtual machine. For example, here is the "server.js" file:
        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    And the "package.json" file:

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    This failed to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in ".gitignore" except "msnodesql", and upload that package ourselves. This can be done with the content below: we first un-ignore the "node_modules" folder, then ignore everything inside it (while letting git descend into the folder), and finally un-ignore the one subfolder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the ".gitignore" syntax, please refer to this thread. Now in Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also needed to remove the "msnodesql" dependency from "package.json". Commit and push to WAWS, and the deployment completes successfully. We can then use Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can keep the dependent packages out of our push and let Windows Azure Web Site download and install them during deployment. For special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can include them in the publish payload instead. The combination of Windows Azure Web Site, Node.js, and NPM makes it even easier and quicker to develop and deploy our Node.js applications to the cloud.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How much processor speed and cores do I need for these tasks?

    - by ajay
    I am planning to buy a new laptop as I find my current one very slow. My question here is specifically about RAM size and CPU power. I will mostly be doing development (not many games). I will be dabbling in distributed computing and in multithreaded, data-intensive, parallelizable tasks on multi-core machines. For example, I want to do concurrent programming in Scala/Java/Clojure etc. and actually be able to see the parallelization. Furthermore, I want the RAM to be enough. From a developer-machine standpoint, do you think 4GB RAM and a 2.53GHz dual-core processor would be enough? I'm basically looking at this model: http://store.apple.com/us/configure/MC118LL/A?mco=MTM3NDcyODk (link dead)

    Read the article

  • Troubleshooting source of heavy resource-usage on a windows server 2008 running multiple sites

    - by batman_man
    Hi, I am running about 10 ASP.NET websites on a hosted virtual server. The server runs Windows Server 2008; each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of discovery I could think of doing was looking in Task Manager, where I can see w3wp.exe and sqlservr.exe jumping to 40% CPU usage every 5-10 seconds. What steps can I take to determine which of my websites is taking these resources and/or which database is getting hit the most? I do have SSMS installed on the machine as well. As you can tell, my sysadmin skills are very, very limited; any help would be much appreciated.
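
    One way to map the busy w3wp.exe back to a site, assuming IIS 7 on that box: list the worker processes with their application pools and match the PID against Task Manager. On the SQL side, the built-in DMVs can rank databases by accumulated CPU since the last restart. Both commands below use standard tooling:

        %windir%\system32\inetsrv\appcmd list wp

        sqlcmd -E -Q "SELECT TOP 5 DB_NAME(qt.dbid) AS db, SUM(qs.total_worker_time) AS cpu FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt GROUP BY DB_NAME(qt.dbid) ORDER BY cpu DESC"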

    Read the article

  • Where is the bottleneck?

    - by jsymon
    There is a limit on connections somewhere along the line here... On a Windows Server 2008 machine, each request to a URL running on localhost takes ~3 seconds to complete. This is fine and normal for the URL. However, if I open the same localhost URL in about 10 tabs and set them all to reload at the same time, they finish sequentially, 3 seconds after each other, meaning the last tab takes 30 seconds to load (3s x 10). What is especially odd is that Firebug reports each page as taking 3 seconds to load. Another point to add is that the status bar just sits at 'done' for the last tab until 3 seconds before completing, when it changes to 'waiting for localhost'. I am praying there is some connection limit somewhere, otherwise this would be a disaster if more than one user ever visited the site at a time! Maybe a limit where one PC can't make more than 2 simultaneous requests to a URL at a given time?
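
    A quick way to tell a browser-side limit from a server-side one, assuming curl (or any non-browser HTTP client) is available on the machine: fire several requests in parallel outside the browser. If those also serialize, the limit is on the server (for example, per-session request serialization in the web application); if they run concurrently, it is just the browser's per-host connection cap, which does not affect separate visitors. A cmd.exe sketch with a placeholder URL:

        for /L %i in (1,1,5) do start /b curl -s -o NUL http://localhost/slowpage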

    Read the article

  • Limiting Sybase ASE 15 CPU usage on VM

    - by reiniero
    I've set up a single-CPU Sybase ASE 15.7 test/hobby/experimentation system on a Debian Squeeze x64 KVM VM. I notice the CPU load goes to 100% and stays there. I'm definitely not a Sybase guru; I'm only interested in seeing whether some programs I'm running work against the database. Looking at the Sybase docs, it seems ASE detects that the machine is idle and then takes over all processing, just waiting for a connection (and doing some housekeeping if needed, apparently). Normally that would be fine, but as it is running in a VM it takes processor resources other VMs could use, and the increased fan noise of the PC near my desk annoys me. I've tried to remedy this:

    - I set the "runnable process search count" parameter from DEFAULT (2000, IIRC) to 3 in /opt/sybase/ASE-15_0/SYBASE.cfg.
    - Following http://sybase.reygrobellet.com/tutorials/install_sybase_vb/standalone04_configure_oralin11#TOC-Configure-kernel, I added this to my /etc/init.d/sybase startup script (though I don't think it makes much difference):

        echo 0 > /proc/sys/kernel/randomize_va_space

    How can I tell Sybase to "behave" and not hog the processor? I don't mind reduced performance.
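
    For what it's worth, the same parameter can also be changed from inside the server with sp_configure, which avoids hand-editing SYBASE.cfg (the login and server name below are placeholders). A low value such as 1 makes the engine yield to the OS as soon as it finds no runnable work:

        isql -Usa -SMYSERVER
        1> sp_configure "runnable process search count", 1
        2> go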

    Read the article

  • Keyboard / keymap problems with Xubuntu 12.04 + NX nomachine

    - by bajafresh4life
    I'm running the NX client on my MacBook Pro to connect to a Xubuntu 12.04 desktop at work. I have configured the NX client to start a console upon connection. I am able to connect to my remote Linux machine and I get a simple xterm console. However, when I run xfce4-session, half my keys no longer work. For example, when I launch a terminal, typing a, s, or d works, but if I type w, e, r, or t, the cursor just blinks. If I Ctrl-C out of xfce4-session, all the keys work fine in my xterm console. If I run xev, this is the output for a key that works:

        KeyRelease event, serial 34, synthetic NO, window 0x2e00001, root 0x373, subw 0x0, time 170160781, (-45,-21), root:(824,429), state 0x4, keycode 16 (keysym 0x63, c), same_screen YES, XLookupString gives 1 bytes: (03) "" XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x2e00001, root 0x373, subw 0x0, time 170160781, (-45,-21), root:(824,429), state 0x4, keycode 67 (keysym 0xffe3, Control_L), same_screen YES, XKeysymToKeycode returns keycode: 63 XLookupString gives 0 bytes: XFilterEvent returns: False

    but this is the output for a key that doesn't work:

        FocusOut event, serial 34, synthetic NO, window 0x2e00001, mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 34, synthetic NO, window 0x2e00001, mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 34, synthetic NO, window 0x0, keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    Any ideas on what I can do to troubleshoot this issue? Googling around offered a few suggestions (like playing with xmodmap) but nothing seemed to work. Also, one thing worth mentioning: I do not have any keyboard issues when remoting into a different Ubuntu 10 box via NX.
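
    One thing that sometimes helps with NX keymap weirdness, offered as a guess since the FocusOut/KeymapNotify events suggest a keymap or grab mismatch rather than a dead key: reset the keyboard map from inside the NX session before starting xfce4-session. Both are standard X utilities; the model/layout values are assumptions to adjust for your keyboard:

        setxkbmap -model pc105 -layout us
        xmodmap ~/.Xmodmap    # only if you keep a custom map (hypothetical file)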

    Read the article

  • What are the performance differences between PCI-Express x16 and x4

    - by Cestarian
    I have two PCI-Express 2.0 x16 slots on my motherboard, but one of them actually runs at x4. For passing my graphics card through to a virtual machine I need to use both slots simultaneously, and unfortunately the easiest way for me to achieve that is putting the stronger card in the x4 slot (the secondary slot; I need the stronger card to be in the secondary slot, not the primary). As such, I am wondering what sort of noticeable performance differences I can expect from using the x4 slot with a strong card as opposed to having it in the true x16 slot. Does it limit performance so much that the strong card in the x4 slot will actually perform worse than the significantly weaker card in the x16 slot? (For spec comparison I am using a GTX 670 in the x4 slot and a GTX 550 Ti in the x16 slot.) What implications does this have?

    Read the article

  • Run Logstalgia on Remote Global Apache Log On a WHM System

    - by macinjosh
    I work for a small web development shop. We have a dedicated Linux server running WHM. For fun we want to run Logstalgia on a machine in our office, and we'd really like it to display information about all the traffic on our server. Logstalgia uses Apache's access logs to generate its visuals; the problem I have is that by default WHM does not keep an access log for all sites combined. How can I safely configure our server to output a combined/global Apache access log in a place accessible by a non-root SSH user? I am also concerned that this file could get quite large, so I'd also need to know how to have it automatically shed old information. To make things more interesting, I'm a programmer, not a sysadmin, so not everything is immediately obvious to me.
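
    A sketch of one approach, with the caveat that on WHM each vhost defines its own CustomLog, so an extra global directive has to be injected into every vhost via WHM's include mechanism (the paths below are assumptions; cPanel builds often keep Apache under /usr/local/apache). Piping through rotatelogs also handles the size concern by rolling the file daily:

        # in an Apache include that applies to every vhost
        CustomLog "|/usr/local/apache/bin/rotatelogs /var/log/apache/global_access_%Y%m%d.log 86400" combined

    A non-root user can then be granted read access to /var/log/apache via group membership, and a cron job can delete files older than a few days.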

    Read the article

  • Perl script and out of memory errors

    - by Kevin
    We have a midsized server with 48GB of RAM and are attempting to import a list of around 100,000 opt-in email subscribers into a new list management system written in Perl. From my understanding, Perl doesn't have imposed memory limits like PHP, and yet we continuously get internal server errors when attempting the import. Investigating the error logs, we see that the script ran out of memory. Since Perl doesn't have a setting to limit memory usage (as far as I can tell), why are we getting these errors? I doubt a small import like this consumes 48GB of RAM. We have compromised and split the list into chunks of 10,000, but would like to figure out the root cause for future fixes. This is a CentOS machine with LiteSpeed as the web server.
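
    Worth checking before blaming Perl itself, as a guess: the limit may be imposed from outside the interpreter, either by a ulimit inherited by the account the script runs under or by LiteSpeed's per-process memory limits for external applications (a setting in its admin console rather than a command). The shell check below is standard; the account name is a placeholder:

        su -s /bin/bash webuser -c 'ulimit -a'

    If "virtual memory" or "max memory size" shows a low cap, raising it (or the LiteSpeed soft/hard memory limits) should make the one-shot import fit.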

    Read the article

  • cannot add a user to sysadmin role in SQL Server

    - by George2
    I am using SQL Server 2008 Management Studio. The current logon account belongs to the machine's local administrators group, and I am using Windows integrated security mode in SQL Server 2008. My issue is: after logging into SQL Server Management Studio, I select my login name under Security/Logins, then select the Server Roles tab, then check the last item, sysadmin, to make myself a member of this role, but it says I do not have enough permission. Any ideas what is wrong? I thought a local administrator would be able to do anything. :-)
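
    For context, starting with SQL Server 2008 being a local Windows administrator no longer grants sysadmin automatically; the grant has to come from a login that already holds the role (sa, or whatever account was chosen during setup). One documented recovery path, sketched with a placeholder account name and the default instance: start the instance in single-user mode, which lets members of the local Administrators group connect as sysadmin, then add the role membership (close SSMS first, since single-user mode allows only one connection):

        net stop MSSQLSERVER
        net start MSSQLSERVER /m
        sqlcmd -E -Q "EXEC sp_addsrvrolemember 'DOMAIN\you', 'sysadmin'"
        net stop MSSQLSERVER
        net start MSSQLSERVER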

    Read the article

  • Unable to cast transparent proxy to type <type>

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across AppDomains. In almost all cases, the problem behind this error is that the two AppDomains involved have loaded different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies, as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows unloading, which otherwise wouldn't be possible, as well as isolating my code from the assembly being loaded. The process to accomplish this is fairly established and I use it for lots of applications that have add-in-like functionality, basically anywhere code needs to be isolated and unloadable. My pattern for this is:

      1. Create a new AppDomain
      2. Load a factory class into the AppDomain
      3. Use the factory class to load additional types from the remote domain

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type, TypeParser, that is accessed cross-AppDomain in the parent domain:

        public class TypeParserFactory : System.MarshalByRefObject, IDisposable
        {
            ...
            /// <summary>
            /// TypeParser factory method that loads the TypeParser
            /// object into a new AppDomain so it can be unloaded.
            /// Creates the AppDomain and creates the type.
            /// </summary>
            public TypeParser CreateTypeParser()
            {
                if (!CreateAppDomain(null))
                    return null;

                // Create the instance inside of the new AppDomain
                // Note: remote domain uses local EXE's AppBasePath!!!
                TypeParser parser = null;
                try
                {
                    Assembly assembly = Assembly.GetExecutingAssembly();
                    string assemblyPath = Assembly.GetExecutingAssembly().Location;
                    parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                              typeof(TypeParser).FullName).Unwrap();
                }
                catch (Exception ex)
                {
                    this.ErrorMessage = ex.GetBaseException().Message;
                    return null;
                }
                return parser;
            }

            private bool CreateAppDomain(string lcAppDomain)
            {
                if (lcAppDomain == null)
                    lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

                AppDomainSetup setup = new AppDomainSetup();

                // *** Point at current directory
                setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
                //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

                this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

                // Need a custom resolver so we can load assembly from non current path
                AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

                return true;
            }
            ...
        }

    Note that the classes must be either [Serializable] (marshaled by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code loads a type from an assembly that is visible in both the current domain and the remote domain, and then instantiates that type.
    This is the code in question:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                  typeof(TypeParser).FullName).Unwrap();

    The last line is what blows up with the "Unable to cast transparent proxy to type <type>" error. Without the cast, the code actually returns a TransparentProxy instance; the cast is what blows up. In other words, I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

    Finding the Problem

    To see what's going on, I tried using the .NET 4.0 dynamic type on the result, and lo and behold it worked. The value returned really is a TypeParser instance:

        Assembly assembly = Assembly.GetExecutingAssembly();
        string assemblyPath = Assembly.GetExecutingAssembly().Location;
        object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                typeof(TypeParser).FullName).Unwrap();

        // dynamic works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // casting fails
        parser = (TypeParser)objparser;

    So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm... mysterious. Another couple of tries reveal the problem, however:

        // works
        dynamic dynParser = objparser;
        string info = dynParser.GetVersionInfo();  // method call works

        // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
        string info3 = typeof(TypeParser).Assembly.CodeBase;

        // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
        string info4 = dynParser.GetType().Assembly.CodeBase;

        // fails
        parser = (TypeParser)objparser;

    As you can see, the second value comes from a totally different assembly, even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead, .NET decided to load the assembly from the original ApplicationBase folder. Ouch!

    How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

        public string GetVersionInfo()
        {
            return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
                   "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" +
                   "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
                   "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
                   "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
        }

    For the factory I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwReflection534cfa1f

    For the instance type I got:

        .NET Version: 4.0.30319.239
        wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
        Assembly Cur Dir: c:\wwapps\wwhelp
        ApplicationBase: C:\Programs\vfp9\
        App Domain: wwDotNetBridge_56006605

    which clearly shows the problem: the two AppDomains load the same assembly from different locations. A better approach yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer shows a load trace for each assembly loaded and where the loader looked to find it.
    The last trace I found for the second, wonky wwReflection20 load looks like this:

        *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***

        The operation was successful.
        Bind result: hr = 0x0. The operation completed successfully.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
        Running under executable c:\programs\vfp9\vfp9.exe
        --- A detailed error log follows.

        === Pre-bind state information ===
        LOG: User = Ras\ricks
        LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
        LOG: Appbase = file:///C:/Programs/vfp9/
        LOG: Initial PrivatePath = NULL
        LOG: Dynamic Base = NULL
        LOG: Cache Base = NULL
        LOG: AppName = vfp9.exe
        Calling assembly : (Unknown).
        ===
        LOG: This bind starts in default load context.
        LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
        LOG: Using host configuration file:
        LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
        LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
        LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
        LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
        LOG: Entering run-from-source setup phase.
        LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
        LOG: Assembly is loaded in default load context.
        WRN: The same assembly was loaded into multiple contexts of an application domain:
        WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
        WRN: This might lead to runtime failures.
        WRN: It is recommended to inspect your application on whether this is intentional or not.
        WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

    Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.

    Remember Your Assembly Locations

    As mentioned earlier, all the failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous: how could the types be different, and why would you have multiple assemblies? But there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like the Razor engine or the ASP.NET runtime, for example), it's common to create a private BIN folder, and it's important to make sure there's no overlap of assemblies. In my case with Html Help Builder, the problem started because I'm using COM interop to access the .NET assembly and the above code. COM interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.
    The solution here was simple enough: delete the extraneous assembly that had been left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that the types wouldn't match. I know I've run into this a few times, and writing it down will hopefully make me remember in the future rather than poking around for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, COM.

    Read the article

  • links for 2010-12-15

    - by Bob Rhubart
    • Pravin Janardanam: Security in OBIEE 11g, Part 1
      Guest blogger Pravin Janardanam kicks off a two-part series in which he tackles the differences in security between OBIEE 11g and 10g, and provides some hints on security migration from a 10g environment. (tags: oracle otn businessintelligence obiee)
    • HttpClusterServlet Configuration (WebLogic Server Acting as a Proxy)
      Quick tips from Divay Dureja. (tags: oracle weblogic servlet configuration)
    • Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration
      "The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components." - from the white paper. (tags: oracle otn oraclevm virtualization)
    • A SOA Safari (Antony Reynolds' Blog)
      SOA author Antony Reynolds shares links to some of his favorite SOA titles available for reading on Safari. (tags: oracle otn soa)
    • Using Crossbow and Solaris 11 Express Zones for a single machine proof of concept environment with Puppet
      "My last blog entry was about my debugging experience with Puppet and promise to share the setup that I used. I now follow up that previous entry with this one which describes my Crossbow + NAT + S11 Zones proof of concept." - Michael Tin (tags: oracle solaris crossbow)
    • @myfear: One thing you did not know about Java EE class loading in GlassFish 2.x
      "Be careful migrating apps from one app server to the other. And don't expect to have a strong hierarchical class loader in place. That is especially true for GF 2.x class loading." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java glassfish weblogic)

    Read the article

  • Mac OSX Server 10.6 DNS Issues

    - by dallasclark
    Hi, the server was upgraded from 10.5 to 10.6. During the upgrade the reverse zones were lost, so I tried to recreate them, but found it's best to delete all zones and definitions and start again. So I've started again, and the reverse zones are appearing, but I'm still having issues. I receive the following errors (if they help):

        01-Nov-2010 12:52:01.254 client 192.168.1.52#57051: view com.apple.ServerAdmin.DNS.public: query (cache) 'server.dev.home.gateway/A/IN' denied
        01-Nov-2010 12:59:24.487 client 192.168.1.52#52858: view com.apple.ServerAdmin.DNS.public: query (cache) 'earth.server.dev.home.gateway/A/IN' denied

    At the moment I have the following setup in the DNS:

        1.168.192.in-addr.arpa.   Reverse Zone
          192.168.1.100           Reverse Mapping   MacPro-Server.local.
        server.dev.               Primary Zone
          server.dev.             Machine           192.168.1.100
          earth.server.dev.       Alias             server.dev.
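
    One guess worth checking, since "query (cache) ... denied" usually means a recursive/cache lookup was refused rather than that a zone is broken: the clients are asking for names the server is not authoritative for (the .home.gateway suffix looks like a search domain appended by the client), and the server's view is not allowing cache queries from the LAN. In raw BIND terms this corresponds to something like the fragment below; on Mac OS X Server the equivalent is allowing queries/recursion for the local network in Server Admin rather than editing named.conf directly.

        options {
            allow-recursion { 127.0.0.1; 192.168.1.0/24; };
        };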

    Read the article

  • udev: waiting for uevents to be processed on my Gentoo

    - by stan31337
    During startup the machine sits on this for about 30 seconds:

        udev: waiting for uevents to be processed

    Then I get a quick message that says something like:

        devfs: timeout (50 seconds)

    I can't see the whole thing, because after that the system starts up very fast, including Xfce. What logs and configs do I need to provide for further investigation?

        $ uname -a
        Linux genta 3.6.6-gentoo #1 SMP Sun Nov 11 11:02:23 NOVT 2012 i686 Genuine Intel(R) CPU T2300 @ 1.66GHz GenuineIntel GNU/Linux

    Thank you! UPD:

        genta / # rc-status sysinit
        Runlevel: sysinit
          dmesg   [ started ]
          udev    [ started ]
          devfs   [ started ]

        genta / # rc-status boot
        Runlevel: boot
          hwclock        [ started ]
          modules        [ started ]
          fsck           [ started ]
          root           [ started ]
          mtab           [ started ]
          localmount     [ started ]
          sysctl         [ started ]
          bootmisc       [ started ]
          hostname       [ started ]
          termencoding   [ started ]
          keymaps        [ started ]
          net.lo         [ started ]
          swap           [ started ]
          urandom        [ started ]
          procfs         [ started ]

    Read the article

  • Error installing Arch Linux

    - by Garethj94
    So, I am trying to install Arch Linux on my Acer Aspire 4830TG, but I keep running into problems. Some background: I am trying to install Arch off a USB stick, and I got the ISO image via BitTorrent. I am also trying to install it alongside Windows 8 (which is already installed). When I boot into Arch Linux I get this error:

        :: Mounting '/dev/disk/by-label/ARCH_201212' to 'run/archiso/bootmnt'
        Waiting 30 seconds for device /dev/disk/by-label/ARCH_201212 ...
        ERROR: '/dev/disk/by-label/ARCH_201212' device did not show up after 30 seconds...
        Falling back to interactive prompt
        You can try to fix the problem manually, log out when you are finished
        sh: can't access tty; job control turned off

    I know it will work if I run it in a virtual machine, but whenever I try to install it on my laptop I keep getting this error. And since you can't register for the Arch forums without an Arch terminal to run their captcha command, I can't ask this on their forums.
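
    A common cause of this exact message, offered as a guess: the installer locates the USB stick by its filesystem label, and the label is lost when the ISO is copied onto the stick as a file or written with a tool that reformats it. Re-writing the image raw with dd usually restores the ARCH_201212 label (the device name and ISO file name below are examples; double-check the device, as this wipes the stick):

        dd bs=4M if=archlinux-2012.12.01-dual.iso of=/dev/sdx && sync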

    Read the article
