Search Results

Search found 9963 results on 399 pages for 'special commands'.


  • ASP.NET: Including JavaScript libraries conditionally from CDN

    - by DigiMortal
    When developing cloud applications, it is still useful to build them so they can also run on a local machine without a network connection. Common JavaScript libraries are one thing you load from a CDN when in the cloud and from an application folder when not connected. In this posting I will show you how to add support for local and CDN script stores to your ASP.NET MVC web application. Our solution is simple: we will add a new configuration setting to our web.config file (including its cloud transform file) and a new property to our web application. In the master page where scripts are included, we will include scripts from the CDN conditionally. There is nothing complex; all the changes we make are simple ones.

    1. Adding a new property to the web application
    Although I am using an ASP.NET MVC web application, these modifications also work very well with ASP.NET Web Forms. Open Global.asax and add a new static property to your application class.

        public static bool UseCdn
        {
            get
            {
                var valueString = ConfigurationManager.AppSettings["useCdn"];
                bool useCdn;
                bool.TryParse(valueString, out useCdn);
                return useCdn;
            }
        }

    If you want fewer round-trips to configuration data, you can keep a nullable boolean in application class scope and load the CDN setting the first time it is asked for (see the sketch at the end of this posting).

    2. Adding a new configuration setting to web.config
    By default my application uses local scripts. Although my application runs in the cloud, I can do a lot of work without a staging environment there, so by default I incur no traffic costs from including scripts from application folders.

        <appSettings>
          <add key="UseCdn" value="false" />
        </appSettings>

    You can also set the UseCdn value to true and change it to false when you are not connected to a network.

    3. Modifying the web.config cloud transform
    I have a special configuration for my solution that I use when deploying my web application to the cloud. This configuration is called Cloud, and the transform for it is located in web.cloud.config. To make the application use the CDN when deployed to the cloud, we need the following transform.

        <appSettings>
          <add key="UseCdn"
               value="true"
               xdt:Transform="SetAttributes"
               xdt:Locator="Match(key)" />
        </appSettings>

    Now when you publish your application to the cloud, it uses the CDN by default.

    4. Including scripts in master pages
    The last thing to change is our master page. My solution is simple: I check whether scripts should come from the CDN, and if so, I include them from there; otherwise the scripts are included from the application folder.

        @if (MyWeb.MvcApplication.UseCdn)
        {
            <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js" type="text/javascript"></script>
        }
        else
        {
            <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
        }

    Although only one script is shown here, you can add all of your scripts that are also available from a CDN in this if-else block. You are free to include scripts from different CDN services if you need to.

    Conclusion
    As we saw, it was very easy to modify our application to use a CDN for JavaScript libraries in the cloud and local scripts when running on a local machine. We made only small changes to our application code, configuration, and master pages to support different script sources. Our application is now less dependent on external sources while we are working on it.
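    As an aside to section 1 above, the cached variant mentioned there might look roughly like this; a minimal sketch, assuming the same appSettings key (the _useCdn field name is illustrative):

        private static bool? _useCdn;   // cached after the first read (hypothetical field)

        public static bool UseCdn
        {
            get
            {
                if (!_useCdn.HasValue)
                {
                    bool parsed;
                    bool.TryParse(ConfigurationManager.AppSettings["useCdn"], out parsed);
                    _useCdn = parsed;   // configuration is read only once per app domain
                }
                return _useCdn.Value;
            }
        }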

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values versus mapping from a WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS, added based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials
    Each WLS data source has a credential map: a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and converted to a database user and password using the credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:
    1) First create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.
    2) Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using credential mapping are that: 1) you don't hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and 2) it provides a layer of abstraction between WLS security and database settings, such that many WLS identities can be mapped to a smaller set of DB identities, thereby requiring only middle-tier configuration updates when WLS users are added or removed.

    You can cut down the number of users that have access to a data source to reduce user-maintenance overhead. For example, suppose that a servlet has one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can use the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a 'Sales' DBMS that needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access, and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS.
    The disadvantages of using the credential map are that: 1) it is difficult to manage (create, update, delete) with a large number of users, although it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries; and 2) you can't share a credential map between data sources, so entries must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) are treated as database credentials and used to authenticate with the database for the connection, bypassing the credential map. This is enabled by setting "use-database-credentials" to true. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:
    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled". The documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:
    - If using the credential map, there needs to be a mapping from each WLS user to a database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.
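    Both models are exercised through the same standard JDBC call from application code. A minimal, self-contained sketch follows; the JNDI name jdbc/SalesDS and the credentials are illustrative, not from the article:

        import javax.naming.InitialContext;
        import javax.sql.DataSource;
        import java.sql.Connection;

        public class SalesLookup {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // The JNDI name below is an assumption for illustration.
                DataSource ds = (DataSource) ctx.lookup("jdbc/SalesDS");
                // Under the default behavior these are WLS credentials, validated and
                // then mapped to database credentials via the credential map; with
                // use-database-credentials=true they are passed to the database as-is.
                try (Connection conn = ds.getConnection("someUser", "somePassword")) {
                    System.out.println("Got connection: " + conn);
                }
            }
        }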

    Read the article

  • Encrypt images before uploading to Dropbox [migrated]

    - by Cherry
    I want to encrypt a file before it is uploaded to Dropbox, so I have implemented the encryption inside the upload code. However, there is an error after I integrated the code together. Where did I go wrong? The error is at putFileOverwriteRequest, and it says:

        The method putFileOverwriteRequest(String, InputStream, long, ProgressListener)
        in the type DropboxAPI is not applicable for the arguments
        (String, FileOutputStream, long, new ProgressListener(){})

    Another problem is this line:

        FileOutputStream fis = new FileOutputStream(new File("dont know what to put in this field"));

    I do not know where to put the file, so that after I read the file it will call the path and then upload to Dropbox. Is anyone kind enough to help me with this? Time is running out for me and I still can't solve the problem. Thank you in advance. The full code is below.

        public class UploadPicture extends AsyncTask<Void, Long, Boolean> {

            private DropboxAPI<?> mApi;
            private String mPath;
            private File mFile;
            private long mFileLen;
            private UploadRequest mRequest;
            private Context mContext;
            private final ProgressDialog mDialog;
            private String mErrorMsg;

            public UploadPicture(Context context, DropboxAPI<?> api, String dropboxPath, File file) {
                // We set the context this way so we don't accidentally leak activities
                mContext = context.getApplicationContext();
                mFileLen = file.length();
                mApi = api;
                mPath = dropboxPath;
                mFile = file;

                mDialog = new ProgressDialog(context);
                mDialog.setMax(100);
                mDialog.setMessage("Uploading " + file.getName());
                mDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
                mDialog.setProgress(0);
                mDialog.setButton("Cancel", new OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        // This will cancel the putFile operation
                        mRequest.abort();
                    }
                });
                mDialog.show();
            }

            @Override
            protected Boolean doInBackground(Void... params) {
                try {
                    KeyGenerator keygen = KeyGenerator.getInstance("DES");
                    SecretKey key = keygen.generateKey(); // generate key

                    // encrypt file here first
                    byte[] plainData;
                    byte[] encryptedData;
                    Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
                    cipher.init(Cipher.ENCRYPT_MODE, key);

                    FileInputStream in = new FileInputStream(mFile); // obtains input bytes from a file
                    plainData = new byte[(int) mFile.length()];
                    in.read(plainData);                  // read bytes of data into an array of bytes
                    encryptedData = cipher.doFinal(plainData); // encrypt data

                    FileOutputStream fis = new FileOutputStream(new File("dont know what to put in this field"));
                    // upload to a path first, then call the path so that it can be uploaded to Dropbox

                    // save encrypted file to Dropbox
                    // By creating a request, we get a handle to the putFile operation,
                    // so we can cancel it later if we want to
                    //FileInputStream fis = new FileInputStream(mFile);
                    String path = mPath + mFile.getName();
                    mRequest = mApi.putFileOverwriteRequest(path, fis, mFile.length(),
                            new ProgressListener() {
                        @Override
                        public long progressInterval() {
                            // Update the progress bar every half-second or so
                            return 500;
                        }

                        @Override
                        public void onProgress(long bytes, long total) {
                            publishProgress(bytes);
                        }
                    });

                    if (mRequest != null) {
                        mRequest.upload();
                        return true;
                    }
                } catch (DropboxUnlinkedException e) {
                    // This session wasn't authenticated properly or user unlinked
                    mErrorMsg = "This app wasn't authenticated properly.";
                } catch (DropboxFileSizeException e) {
                    // File size too big to upload via the API
                    mErrorMsg = "This file is too big to upload";
                } catch (DropboxPartialFileException e) {
                    // We canceled the operation
                    mErrorMsg = "Upload canceled";
                } catch (DropboxServerException e) {
                    // Server-side exception. These are examples of what could happen,
                    // but we don't do anything special with them here.
                    if (e.error == DropboxServerException._401_UNAUTHORIZED) {
                        // Unauthorized, so we should unlink them. You may want to
                        // automatically log the user out in this case.
                    } else if (e.error == DropboxServerException._403_FORBIDDEN) {
                        // Not allowed to access this
                    } else if (e.error == DropboxServerException._404_NOT_FOUND) {
                        // path not found (or if it was the thumbnail, can't be thumbnailed)
                    } else if (e.error == DropboxServerException._507_INSUFFICIENT_STORAGE) {
                        // user is over quota
                    } else {
                        // Something else
                    }
                    // This gets the Dropbox error, translated into the user's language
                    mErrorMsg = e.body.userError;
                    if (mErrorMsg == null) {
                        mErrorMsg = e.body.error;
                    }
                } catch (DropboxIOException e) {
                    // Happens all the time, probably want to retry automatically.
                    mErrorMsg = "Network error. Try again.";
                } catch (DropboxParseException e) {
                    // Probably due to Dropbox server restarting, should retry
                    mErrorMsg = "Dropbox error. Try again.";
                } catch (DropboxException e) {
                    // Unknown error
                    mErrorMsg = "Unknown error. Try again.";
                } catch (FileNotFoundException e) {
                }
                return false;
            }

            @Override
            protected void onProgressUpdate(Long... progress) {
                int percent = (int) (100.0 * (double) progress[0] / mFileLen + 0.5);
                mDialog.setProgress(percent);
            }

            @Override
            protected void onPostExecute(Boolean result) {
                mDialog.dismiss();
                if (result) {
                    showToast("Image successfully uploaded");
                } else {
                    showToast(mErrorMsg);
                }
            }

            private void showToast(String msg) {
                Toast error = Toast.makeText(mContext, msg, Toast.LENGTH_LONG);
                error.show();
            }
        }
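    The compile error happens because putFileOverwriteRequest expects an InputStream to read from, while FileOutputStream is a stream you write to. One way to fix it, shown as a minimal sketch (not the poster's code), is to skip the intermediate file entirely and wrap the encrypted bytes in a ByteArrayInputStream:

        // Inside doInBackground, replacing the FileOutputStream lines:
        byte[] encryptedData = cipher.doFinal(plainData);
        // Wrap the encrypted bytes in an InputStream the API can read from.
        ByteArrayInputStream encryptedStream = new ByteArrayInputStream(encryptedData);

        String path = mPath + mFile.getName();
        // Pass the length of the *encrypted* data, not mFile.length():
        // PKCS5 padding makes the ciphertext longer than the plaintext.
        mRequest = mApi.putFileOverwriteRequest(path, encryptedStream,
                encryptedData.length,
                progressListener); // the same anonymous ProgressListener shown above

    One more caveat: the original code generates a fresh random DES key on every run and never stores it, so nothing uploaded this way could ever be decrypted afterwards. The key (or a key derived from a passphrase) would need to be saved somewhere first.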

    Read the article

  • how to run mysql drop and create synonym in shell script

    - by bgrif
    I have added this command to a script I am writing, and I am running into an issue with it not logging onto mysql and running the commands. How can I fix this and make it run?

        #! /bin/bash
        Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments:

        # This next section is to go to the mysql server and make changes. You can drop and create
        # synonyms, truncate a table, and insert into a different one. You will be able to verify
        # the counts at the different locations.

        $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;"

        $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR; TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB; INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB; Verify select count() from TF90LM.TF90.BTXADDR; select count() from TF90LMS.TF90.BTXADDR; select count() from TF90LM.TF90.BTXSUPB; select count() from TF90LMS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR; select count() from TF90NC.TF90.BTXSUPB; select count() from TF90NCS.TF90.BTXSUPB"

        $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR; TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB; INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB; Verify select count() from TF90PV.TF90.BTXADDR; select count() from TF90PVS.TF90.BTXADDR; select count() from TF90PV.TF90.BTXSUPB; select count() from TF90PVS.TF90.BTXSUPB"

        TFL09143 Staging
        cd \ntsrv\common\To\IT-CERT-TEST\TFL09143   # change to mapped network drive
        cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg
        # Copies the package from the networked folder and then copies to the location(s) needed.

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
          echo "This script sets up TF90 Staging"
          echo -n "Which production do you want to run? (RB/TaxLocator/Cyclic)"
          read ProductionDistro
        else
          ProductionDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
          if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then
            InvalidInput="false"
            break
          else
            echo "You have entered an error"
            echo "You must type RB or TaxLocator or Cyclic"
            echo "you typed $ProductionDistro"
            echo "This script sets up TF90 Staging"
            read ProductionDistro
          fi
        done

        InvalidInput="true"
        if [ $# -eq 0 ] ; then
          echo "This script sets up RB TF90 Staging"
          echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)"
          read ElementDistro
        else
          ElementDistro="$1"
        fi
        while [ "$InvalidInput" = "true" ]
        do
          if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then
            InvalidInput="false"
            break
          else
            echo "You have entered an error"
            echo "You must type TF90 or TF90BP or TF90LM or TF90PV"
            echo "you typed $ElementDistro"
            echo "This script sets up TF90 Staging"
            read ElementDistro
          fi
        done

        if [ "$ElementDistro" = "TF90" ] ; then
          cd /d/tf90/code_stg
          vim TFL09143.pkg
          export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90NCS;
          export DATASET=DEFAULT
          pkgintall -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90BP" ] ; then
          cd /d/tf90bp/code_stg
          vim TFL09143.pkg
          export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90BPS;
          start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "$TF90LM" ] ; then
          cd /d/tf90lm/code_stg
          vim TFL09143.pkg
          export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03-BSI;export DATABASE=TF90LMS;
          start tfloader -l -v ../TFL09143.pkg
        fi
        if [ "$ElementDistro" = "TF90PV" ] ; then
          cd /d/tf90pv/code_stg
          vim TFL09143.pkg
          export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03-BSI;DATABASE=TF90PVS;
          start tfloader -l -v ../TFL09143.pkg
        fi
        exit 0
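    One immediate problem is the client option syntax: the mysql client takes --user and --password (or -u and -p), not --u/--p, and empty quoted credentials will be sent literally. A minimal sketch of a corrected invocation, with placeholder credentials:

        #!/bin/bash
        # Corrected option syntax; the user and password here are placeholders.
        mysql --host=app03-bsi --user=dbuser --password=dbpass TF90BPS -B -s -e "
        SELECT COUNT(*) FROM BTXADDR;
        SELECT COUNT(*) FROM BTXSUPB;
        "

    Note also that MySQL has no CREATE SYNONYM statement and does not use three-part db.schema.table names; that syntax belongs to SQL Server (or Oracle, for synonyms). If the target server is actually SQL Server, a client such as sqlcmd, not mysql, would be the right tool; that is a guess based on the SQL shown, not something the question states.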

    Read the article

  • Part 4 of 4 : Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let's get started:

    Tip/Trick #16)
    What is it? Find out version information about Silverlight and which WebKit it is using by going to http://issilverlightinstalled.com/scriptverify/.
    Why do I care? I've had those users where it's just easier to give them a site and say "copy/paste the line that says User Agent" in order to troubleshoot a Silverlight problem. I've also been debugging my own Silverlight applications and needed an easy way to determine whether the plugin is disabled.
    How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. Example screenshots: results from Chrome 7, and results from Internet Explorer 8 (with Silverlight disabled).

    Tip/Trick #17)
    What is it? Use lambdas whenever you can.
    Why do I care? It is my personal opinion that code is easier to read using lambdas, once you get past the syntax.
    How do I do it: For example, you may write code like the following:

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            Application.Current.CheckAndDownloadUpdateCompleted +=
                new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted);
        }

        void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
        {
            if (e.UpdateAvailable)
            {
                MessageBox.Show(
                    "An update has been installed. To see the updates please exit and restart the application");
            }
        }

    To me this style forces me to look for the other method to see what the code is actually doing. The style below is much easier to read, in my opinion, and does the exact same thing. (The lambda parameters are named s and args here, since a lambda parameter named e would collide with the enclosing method's parameter and fail to compile.)

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            Application.Current.CheckAndDownloadUpdateCompleted += (s, args) =>
            {
                if (args.UpdateAvailable)
                {
                    MessageBox.Show(
                        "An update has been installed. To see the updates please exit and restart the application");
                }
            };
        }

    Tip/Trick #18)
    What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number.
    Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed, and now we need to update all of our service references. We need a way to stop this.
    How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web solution and go to Properties, then click the tab that says Web. You just need to click the radio button and specify a port number. Now you won't be bothered with that anymore.

    Tip/Trick #19)
    What is it? You can disable the Close button on a ChildWindow.
    Why do I care? I wouldn't blog about it if I hadn't seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow.
    How do I do it: A property exists on the ChildWindow called "HasCloseButton"; you simply change that to false and your close button is gone.
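    For reference, the property toggle is a one-liner; a minimal sketch (MyChildWindow is a hypothetical ChildWindow subclass):

        // Wherever you create the ChildWindow:
        var prompt = new MyChildWindow();
        prompt.HasCloseButton = false;   // hides the X in the corner
        prompt.Show();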
    You can also delete the "Cancel" button and add some logic to the OK button if you want the user to respond before proceeding.

    Tip/Trick #20)
    What is it? Clean up your XAML.
    Why do I care? By removing unneeded namespaces, not naming all of your controls, and getting rid of designer markup, you can improve code quality and readability.
    How do I do it: (This is a 3-in-one tip.)

    1) Remove unused designer markup. Have you ever wondered what the following code snippet does?

        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"

    This code tells the designer to do something special with this page in "design mode", specifically the width and the height of the page. When the page is running in the browser, this information is not used; it is ignored by the XAML parser. In other words, if you don't need it, then delete it.

    2) If you are not using a namespace, then remove it. In the code sample below, I am using ReSharper, which will tell me the ones that I'm not using by graying them out. If you don't have ReSharper, you can look in your XAML and manually remove the unneeded namespaces.

    3) Don't name a control unless you actually need to refer to it in procedural code. If you name a control, you take a slight performance hit that is totally unnecessary if it's not being referenced.

        <TextBlock Height="23" Text="TextBlock" />

    That is the end of the series. I hope that you enjoyed it, and please check out Part 1 | Part 2 | Part 3 if you're hungry for more.

    Read the article

  • Can ping device from one computer and not the other

    - by Sean Duggan
    I've recently been assigned to work on a diagnostic program written in C++ which communicates with a piece of electronic equipment. Our normal scenario involves communicating via an RS232 interface, but I've been asked to make our program work over Ethernet, lifting the networking code from an older program written in Visual Basic. After much thrashing about trying to get the code to work, and continuing to get 10049 Winsock errors when I tried to connect, I tried pinging the switch.

    From the computer the VB program is running on, I can see the switch via ping, nslookup, tracert, and pathping (I was going down the list of programs), and I can do this via URI or IP address. From my laptop, sending the same commands fails every time. They're both using the same network cable and the same USB-to-Ethernet device (I've been swapping them between tests), but one can see the switch and the other cannot. I'm working on the programming end, but the ping results make me think that there might be a network issue stymieing me. (wry grin) I'm not much of a network guy, so I'm appealing for expert assistance. Both computers are running Windows XP, if that helps.

    The connection is to an "IP-RS8" device which then connects to our VCU-C units. Each unit is accessible via URI or IP address on the desktop computer we usually have connected to the units (it's running the older VB program that I was asked to lift the networking code from). The connection is made via a USB-to-Ethernet adapter so as to leave the regular Ethernet port available for connecting to the company network.

    Hmm... come to think of it, I've probably been confusing the issue by talking about pinging "the switch" rather than indicating that it's the devices. My apologies. Communication is generally done with a DLL that uses Winsock functions to make queries for data from the VCU and then to receive. I'm failing when connecting. I haven't found anything in the firewall which should block these commands, but I'll keep poking. I don't know if it's potentially relevant, but on the desktop the adapter maps to Local Area Connection 3, while on the laptop it consistently maps to Local Area Connection 2. Currently reading up on DHCP.

    IPConfig /all results:

    Desktop

        Host Name . . . . . . . . . . . . : AMERDAEXXXXXX
        Primary Dns Suffix  . . . . . . . : amer.example.com
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

        Ethernet adapter Local Area Connection X:

        Connection-specific DNS Suffix  . : amer.example.com
        Description . . . . . . . . . . . : Broadcom NetXtreme XYxx Gigabit Controller
        Physical Address. . . . . . . . . : YY-XX-YB-XX-XX-XX
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XYY.XXX.XY.XXX
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XYY.XXX.XY.X
        DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
        DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
        Primary WINS Server . . . . . . . : XY.XXX.XXY.X
        Secondary WINS Server . . . . . . : XY.XXY.XXY.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XX AM
        Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XY:XX:XX AM

        Ethernet adapter Local Area Connection X:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : ASIX axYYYYX USBX.Y to Fast Ethernet Adapter
        Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XY.Y.Y.X
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XY.Y.Y.X
        DHCP Server . . . . . . . . . . . : XY.Y.Y.XY
        DNS Servers . . . . . . . . . . . : XY.Y.Y.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XY:XX:XY AM
        Lease Expires . . . . . . . . . . : Tuesday, August YX, XYXX XX:XY:XY AM

    Laptop

        Windows IP Configuration

        Host Name . . . . . . . . . . . . : AMERLAFYYXXYX
        Primary Dns Suffix  . . . . . . . : amer.example.com
        Node Type . . . . . . . . . . . . : Hybrid
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : COMPANY.com amer.example.com atle.example.com cone.example.com apac.example.com scan.example.com bYX.example.com

        Ethernet adapter Local Area Connection:

        Connection-specific DNS Suffix  . : amer.example.com
        Description . . . . . . . . . . . : Intel(R) 82567LM Gigabit Network Connection
        Physical Address. . . . . . . . . : YY-XY-BY-DY-XB-YX
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : XYY.XXX.XY.XY
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXY.Y
        Default Gateway . . . . . . . . . : XYY.XXX.XY.X
        DHCP Server . . . . . . . . . . . : XY.XXX.XXY.XX
        DNS Servers . . . . . . . . . . . : XY.XXX.XXY.XX XY.XXY.XXY.XX
        Primary WINS Server . . . . . . . : XY.XXX.XXY.X
        Secondary WINS Server . . . . . . : XY.XXY.XXY.X
        Lease Obtained. . . . . . . . . . : Thursday, July XX, XYXX XX:XX:XX AM
        Lease Expires . . . . . . . . . . : Sunday, July XX, XYXX XX:XX:XX AM

        Ethernet adapter {XYXAAYXX-YEDY-XXYX-YYEX-BYXYXXYEEYEX}:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Nortel IPSECSHM Adapter - Packet Scheduler Miniport
        Physical Address. . . . . . . . . : XX-XX-XX-XX-XX-YY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : Y.Y.Y.Y
        Subnet Mask . . . . . . . . . . . : Y.Y.Y.Y
        Default Gateway . . . . . . . . . :

        Ethernet adapter Leaf Networks Adapter:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : Leaf Networks Adapter
        Physical Address. . . . . . . . . : YY-FF-FA-BC-YF-AY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : X.XYY.XY.XX
        Subnet Mask . . . . . . . . . . . : XXX.Y.Y.Y
        Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection 3:

        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Bluetooth LAN Access Server Driver
        Physical Address. . . . . . . . . : YY-FX-AX-YA-BY-CA

        Ethernet adapter Wireless Network Connection 2:

        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Intel(R) WiFi Link 5300 AGN
        Physical Address. . . . . . . . . : YY-XX-YA-CX-FC-YE

        Ethernet adapter Local Area Connection 2:

        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : ASIX ax88772 USB2.0 to Fast Ethernet Adapter
        Physical Address. . . . . . . . . : YY-XY-BY-YX-XY-AY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : XYX.XYY.X.X
        Subnet Mask . . . . . . . . . . . : XXX.XXX.XXX.Y
        Default Gateway . . . . . . . . . :
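    Since the C++ side fails with 10049 (WSAEADDRNOTAVAIL, which generally means an invalid address was handed to connect() or bind(), rather than an unreachable host), it may be worth isolating the connect call from the DLL. A minimal sketch with placeholder host and port, not the poster's code:

        // connect-test.cpp: minimal Winsock connect test (sketch; host/port are placeholders).
        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <stdio.h>
        #pragma comment(lib, "ws2_32.lib")

        int main()
        {
            WSADATA wsa;
            WSAStartup(MAKEWORD(2, 2), &wsa);

            addrinfo hints = {}, *res = 0;
            hints.ai_family = AF_INET;
            hints.ai_socktype = SOCK_STREAM;
            // getaddrinfo resolves both hostnames and dotted IPs correctly;
            // feeding a hostname to inet_addr() is a classic source of 10049.
            if (getaddrinfo("192.168.0.10", "4001", &hints, &res) != 0) {
                printf("resolve failed: %d\n", WSAGetLastError());
                return 1;
            }

            SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (s == INVALID_SOCKET || connect(s, res->ai_addr, (int)res->ai_addrlen) != 0)
                printf("connect failed: %d\n", WSAGetLastError());
            else
                printf("connected\n");

            freeaddrinfo(res);
            closesocket(s);
            WSACleanup();
            return 0;
        }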

    Read the article

  • Run CGI in IIS 7 to work with GET without Requiring POST Request

    - by Mohamed Meligy
    I'm trying to migrate an old CGI application from an existing Windows 2003 server (IIS 6.0), where it works just fine, to a new Windows 2008 server with IIS 7.0, where we're getting the following problem. After setting up the module handler and everything, I find that I can only access the CGI application (rdbweb.exe) if I'm calling it via a POST request (a form submit from another page). If I just type in the URL of the file (issuing a GET request), I get the following error:

        HTTP Error 502.2 - Bad Gateway
        The specified CGI application misbehaved by not returning a complete set of HTTP headers.
        The headers it did return are "Exception EInOutError in module rdbweb.exe at 00039B44. I/O error 6. ".

    This is a very old application for one of our clients. When we tried to call the vendor, they said we would need to pay ~$3000 in annual support fees before they would even start talking about it. Of course I'm trying to avoid that! Note that:
    - If we create a normal HTML form that submits to rdbweb.exe, the CGI works normally. We can't use this as a workaround, though, because some pages in the application link to rdbweb.exe with a normal link, not a form submit.
    - If we run rdbweb.exe from a Console (Command Prompt) window, not IIS, we get the normal HTML we'd typically expect; no problem.

    We have tried the following:
    - Ensuring the CGI module mapped to rdbweb.exe in IIS has all permissions (read, write, execute) enabled, and also that all verbs are allowed, not just specific ones; we also tried allowing GET and POST explicitly.
    - Ensuring the application pool has "Enable 32-bit applications" set to true.
    - Ensuring the website runs with an account that has full permissions on the rdbweb.exe file and the whole website (although we know "read" and "execute" should be enough).
    - Ensuring the machine-wide IIS setting "ISAPI and CGI Restrictions" has the full path to rdbweb.exe allowed.
    - Making sure we have the latest Windows updates (for IIS 6 we found Knowledge Base articles describing bugs that require hotfixes, but nothing similar was found for IIS 7).
    - Changing the module from CGI to FastCGI; not working either.

    Now the only remaining possibility we have investigated is the following Microsoft Knowledge Base article: http://support.microsoft.com/kb/145661, which is about "CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are:". The article suggests the following solution: modify the source code for the CGI application's header output. The following is an example of a correct header:

        print "HTTP/1.0 200 OK\n";
        print "Content-Type: text/html\n\n\n";

    Unfortunately, we do not have the source to try this out, and I'm not sure anyway whether this is the issue we're having. Can you help me with this problem? Is there a way to make the application work without requiring a POST request? Note that on the old IIS 6 server the application works just fine, and I could not find any special IIS configuration there whose equivalent I might want to try on IIS 7.
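    For reference, the KB fix amounts to making sure the CGI writes a complete header block, terminated by a blank line, before any body output, for both GET and POST. A minimal C sketch of a well-behaved CGI (it stands in for rdbweb.exe, whose source is unavailable; it is not a fix for that binary):

        /* minimal-cgi.c: always emits complete headers before the body. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *method = getenv("REQUEST_METHOD");  /* "GET" or "POST" */

            /* Headers first, terminated by a blank line, regardless of method. */
            printf("Status: 200 OK\r\n");
            printf("Content-Type: text/html\r\n");
            printf("\r\n");

            printf("<html><body>Method: %s</body></html>\n",
                   method ? method : "(none)");
            return 0;
        }

    If a GET request to this compiles-and-runs stub succeeds under the same handler mapping, the 502.2 is specific to how rdbweb.exe behaves on GET (the EInOutError suggests it fails while reading input it only receives on POST), rather than to the IIS 7 configuration.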

    Read the article

  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    I need to share some information regarding SSL certificates and their types: which SSL certificates are widely used, and so on. There are several SSL certificates available in the market today to secure your domains, multiple subdomains, your applications, and code too. A few of the details are mentioned below.

    Cheap SSL certificates. The cheap SSL certificates available today include the standard RapidSSL certificate, Thawte SSL 123, and similar basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

    EV certificates. I found the GeoTrust True BusinessID with EV certificate, which is one of the cheapest EV certificates available in the market today; you can also find Thawte and VeriSign EV versions of certificates. EV is designed to prevent phishing attacks better than normal SSL certificates. What makes an EV certificate so special? An SSL certificate provider has to do some extensive validation to give you one, including: verifying that your organization is legally registered and active, verifying the address and phone number of your organization, verifying that your organization has the exclusive right to use the domain specified in the EV certificate, verifying that the person ordering the certificate has been authorized by the organization, and verifying that your organization is not on any government blacklists.

    SSL wildcard certificates. SSL wildcard certificates are big money-savers. An SSL wildcard certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure the following websites:

        * www.yourdomain.com
        * secure.yourdomain.com
        * product.yourdomain.com
        * info.yourdomain.com
        * download.yourdomain.com
        * anything.yourdomain.com

    and all of these websites are hosted on multiple server boxes, you can purchase and install one wildcard certificate issued to *.yourdomain.com to secure all these sites (see the example CSR command at the end of this post).

    SAN certificates. SAN certificates are interesting certificates and are helpful if you want to secure multiple domains by generating a single CSR; you can then install the same certificate on your additional sites without generating new CSRs for all the additional domains.

    Code signing certificates. A code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system, similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you would be held legally accountable for it.

    You can sign many different types of code. The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate), Apple applications (using an Apple code signing certificate), Microsoft Office VBA objects and macros (using a VBA code signing certificate), .jar files (using a Java code signing certificate), .air or .airi files (using an Adobe AIR certificate), and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code as long as you convert the certificate to the correct format first.

    I also found the URL below, which provides good suggestions for purchasing the best SSL certificates to secure your site, whether for a financial institution, bank, hosting provider, ISP, or retail merchant. Please vote and provide comments or any additional suggestions regarding SSL certificates.
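    As a concrete example of the single-CSR workflow mentioned above, a wildcard CSR can be generated with OpenSSL. A minimal sketch; the file names, subject fields, and domain are placeholders:

        # Generate a 2048-bit private key and a CSR for *.yourdomain.com.
        openssl genrsa -out yourdomain.key 2048
        openssl req -new -key yourdomain.key -out yourdomain.csr \
            -subj "/C=US/ST=State/L=City/O=Example Inc/CN=*.yourdomain.com"

    The resulting yourdomain.csr is what you submit to the certificate authority; the key file stays on your server and is installed alongside the issued certificate on each host.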

    Read the article

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam's ASF live stream to an ffserver, to output a WebM video format. The server starts successfully, but the ffmpeg command used to feed the ffserver fails and generates a core dump.

    Environment
        Debian 7.5
        ffmpeg 2.2

    Input stream

        $ ffprobe http://account:password@webcam/videostream.asf
        Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s

    ffserver configuration
    My ffserver configuration is:

        Port 8091
        RTSPPort 554
        BindAddress 192.168.1.62
        MaxHTTPConnections 1000
        MaxClients 100
        MaxBandwidth 1000
        CustomLog -

        <Feed webcam.ffm>
            File /tmp/webcam.ffm
            FileMaxSize 500M
            ACL allow localhost
            ACL allow 192.168.0.0 192.168.255.255
        </Feed>

        <Stream webcam.webm>          # Output stream URL definition
            Feed webcam.ffm           # Feed from which to receive video
            Format webm

            # Audio settings
            AudioCodec vorbis
            AudioBitRate 64           # Audio bitrate

            # Video settings
            VideoCodec libvpx
            VideoSize 640x480         # Video resolution
            VideoFrameRate 25         # Video FPS
            AVOptionVideo flags +global_header  # Parameters passed to encoder
                                                # (same as ffmpeg command-line parameters)
            AVOptionVideo cpu-used 0
            AVOptionVideo qmin 10
            AVOptionVideo qmax 42
            AVOptionVideo quality good
            AVOptionAudio flags +global_header
            PreRoll 15
            StartSendOnKey
            # VideoBitRate 32         # Video bitrate
        </Stream>

        <Stream status.html>
            Format status
            # Only allow local people to get the status
            ACL allow localhost
            ACL allow 192.168.0.0 192.168.255.255
        </Stream>

    ffmpeg feed
    I run the following command, which fails:

        $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm
        Input #0, asf, from 'http://account:password@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
        [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly
        Segmentation fault

    I tried

        $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm

    but it raises the same error. Thanks for your help.

    Edit
    For easy testing (I thought), I tried to publish the whole ASF stream as-is, meaning connecting the ASF webcam output stream to an ffserver that outputs ASF format too, and thus with mirrored encoding. So I changed the ffserver configuration to:

        ...
        <Stream webcam.asf>
            Feed webcam.ffm
            Format asf
            VideoFrameRate 25
            VideoSize 640X480
            VideoBitRate 256
            VideoBufferSize 1000
            VideoGopSize 30
            AudioBitRate 32
            StartSendOnKey
        </Stream>
        ...

    And the output is now:

        Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf':
          Duration: N/A, start: 0.000000, bitrate: 32 kb/s
            Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc
            Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s
        [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly
        Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm':
          Metadata:
            creation_time   : now
            encoder         : Lavf55.40.100
            Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s
            Metadata:
              encoder         : Lavc55.64.100 wmav2
            Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc
        Stream mapping:
          Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2)
          Stream #0:0 -> #0:1 (mjpeg -> msmpeg4)
        Press [q] to stop, [?] for help
        Segmentation fault

    I can't even forward the stream.
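    One way to narrow this down (a sketch, not a confirmed fix) is to take ffserver out of the loop and transcode the camera stream straight to a local WebM file, forcing the pixel-format conversion that the deprecated-format warning points at:

        # Transcode directly to a file to isolate the crash from the ffm feed path.
        # The URL and credentials are the ones from the question; the output path is arbitrary.
        ffmpeg -i http://account:password@webcam/videostream.asf \
               -vf format=yuv420p \
               -c:v libvpx -b:v 256k \
               -c:a libvorbis -b:a 64k \
               /tmp/webcam-test.webm

    If this runs cleanly, the segfault lies in the ffm muxing/feed path rather than in the decode; if it crashes too, the yuvj422p-to-yuv420p conversion of the MJPEG stream itself is the suspect.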

    Read the article

  • Can't run PHPUnit tests in bash on Ubuntu 11.10

    - by Mohamad Elbialy
    I'm working with Ubuntu 11.10 as root on my local machine, and I've installed XAMPP 1.7.7. I'm a newbie to Ubuntu. While following a tutorial on SitePoint (http://www.sitepoint.com/getting-started-with-pear/) on how to install PEAR in order to use PHPUnit, I didn't notice it at the time, but it seems that I installed (or used an existing) PHP version 5.3.6 on the command line to do that, and the PEAR installation was built on that version. With XAMPP installed, I now have two versions of PHP: XAMPP's 5.3.8 and the system's 5.3.6. What I want to do is use the existing XAMPP PHP version and build PEAR on that, so all my work goes through XAMPP. So my questions are:

    1. How do I uninstall PHP 5.3.6 and its PEAR installation?
    2. How do I link the command line with the XAMPP version of PHP?
    3. How do I build the next PEAR installation on the XAMPP version of PHP? I want all my web development work to go through XAMPP; is there anything else I need to uninstall to avoid this confusion?

    4. I did the following in an attempt to solve the problem. I opened my shell configuration with:

        gedit ~/.bashrc

    and added this to the end of the ~/.bashrc file in an attempt to change the environment path:

        export PATH=/opt/lampp/bin:$PATH
        export PATH=/opt/lampp/lib/php:$PATH
        export PATH=/opt/lampp/lib/php/PHPUnit/pearcmd.php:$PATH

    I checked the PHP and PEAR versions using 'php -v' and 'pear list'. For PHP I got:

        PHP 5.3.8 (cli) (built: Sep 19 2011 13:29:27)
        Copyright (c) 1997-2011 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies

    and for PEAR:

        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.9   stable
        Console_Getopt   1.3.1   stable
        PEAR             1.9.4   stable
        PHPUnit          1.3.2   stable
        Structures_Graph 1.0.4   stable
        XML_Util         1.2.1   stable

    When I run 'phpunit MessageTest.php', I get:

        PHP Warning:  require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error:  require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    5. I ran the following commands, reported in other questions as a solution to that error:

        sudo apt-get remove phpunit
        sudo pear channel-discover pear.phpunit.de
        sudo pear channel-discover pear.symfony-project.com
        sudo pear channel-discover components.ez.no
        sudo pear update-channels
        sudo pear upgrade-all
        sudo pear install --alldeps phpunit/PHPUnit
        sudo apt-get install phpunit

    and updated the include path in php.ini to be:

        include_path = ".:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR"

    The PHP file MessageTest.php:

        <?php
        require 'PHPUnit/Autoload.php';
        $path = '/opt/lampp/lib/php/PEAR';
        set_include_path(get_include_path() . PATH_SEPARATOR . $path);
        require_once 'PHPUnit/Framework/TestCase.php';
        require_once 'Message/Controller/MessageController.php';

        class MessageTest extends PHPUnit_Framework_TestCase
        {
            private $message;

            public function setUp()
            {
                $this->message = new MessageController();
            }

            public function tearDown()
            {
            }

            public function testRepeat()
            {
                $yell = "Hello, Any One Out There?";
                $this->message->repeat($yell);                    // sending a request
                $returnedMessage = $this->message->repeat($yell); // get a response
                $this->assertEquals($returnedMessage, $yell);
            }
        }
        ?>

    The MessageController class from MessageController.php that I'm trying to test:

        <?php
        class MessageController
        {
            public function actionHelloWorld()
            {
                echo 'helloWorld';
            }

            public function repeat($inputString)
            {
                return $inputString;
            }
        }

        $msg = new MessageController;
        ?>

    I'm not using any PHP framework; I just made the files and classes, that's all. And still I get the same error:

        PHP Warning:  require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
        PHP Fatal error:  require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38

    Sure, I'm getting demanding here; I've wasted a lot of time and got really frustrated over this. I hope you guys don't get bored reading through my questions. I appreciate your help. Thanks in advance, Mohamad Elbialy
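    The error suggests /usr/bin/phpunit is still the Debian-packaged script running against the system PHP and its include path, not XAMPP's. A few commands (a sketch; paths may differ on your machine) that show which binaries and include paths are actually in effect:

        # Which php/pear/phpunit does the shell resolve first?
        which -a php pear phpunit

        # Which interpreter does the phpunit script itself invoke?
        head -n 1 /usr/bin/phpunit

        # What include_path does each PHP binary use?
        php -r 'echo get_include_path(), PHP_EOL;'
        /opt/lampp/bin/php -r 'echo get_include_path(), PHP_EOL;'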

    Read the article

  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually; now I'm writing up a bash script in Cygwin on my Vista laptop, as I intend to move this to a Linux server at some point.

    Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way.

    Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log... this time, it did Applications, ran the log, then checked out Database, and froze. I'll put the sleep 30 before and after the log this time.

    co2.sh

        #!/bin/bash

        function checkout {
            mkdir $1
            svn checkout svn://dev-server/$1 $1
            svn log --verbose --xml >> svn.log $1
            sleep 30
        }

        cd /cygdrive/c/Users/My\ User/Documents/Repos/wc

        checkout Applications
        checkout Database
        checkout WebServer/www.mysite.com
        checkout WebServer/anotherhost.mysite.com
        checkout WebServer/AnotherApp
        checkout WebServer/thirdhost.mysite.com
        checkout WebServer/fourthhost.mysite.com
        checkout WebServer/WebServices

    It works, for the most part, but for some reason it has a tendency to stop working after a few repositories, usually right after finishing one repository and before going to the next. When it fails, it will not recover on its own. I've tried commenting out the svn line; it goes in and creates all the directories just fine when I do that, so it's not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top on remote systems running all night and rxvt didn't seem to mind. Also, I haven't done any bash scripting in Cygwin, so I suppose this might not be recommended, though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here?

    Edit: this time when I hit Ctrl+Z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see, there are lots of svn processes hanging around... strange.

        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
        $ jobs
        [1]-  Stopped                 bash co2.sh
        [2]+  Stopped                 ./co2.sh
        $ kill %1
        [1]-  Stopped                 bash co2.sh
        $
        [1]-  Terminated              bash co2.sh
        $ ps
              PID  PPID  PGID   WINPID  TTY  UID    STIME COMMAND
             7872     1  7872     2340    0 1000   Jun 29 /usr/bin/svn
             7752     1  6140     7828    1 1000   Jun 29 /usr/bin/svn
             6192     1  5044     2192    1 1000   Jun 30 /usr/bin/svn
             7292     1  7452     1796    1 1000   Jun 30 /usr/bin/svn
             6236     1  7304     7468    2 1000    Jul 2 /usr/bin/svn
             1564     1  5032     7144    2 1000    Jul 2 /usr/bin/svn
             9072     1  3960     6276    3 1000    Jul 3 /usr/bin/svn
             5876     1  5876     5876  con 1000 11:22:10 /usr/bin/rxvt
              924  5876   924    10192    4 1000 11:22:10 /usr/bin/bash
             7212     1  7332     5584    4 1000 13:17:54 /usr/bin/svn
             9412     1  5480     8840    4 1000 15:38:16 /usr/bin/svn
           S 8128   924  8128     9452    4 1000 17:38:05 /usr/bin/bash
             9132  8128  8128     8172    4 1000 17:43:25 /usr/bin/svn
             3512     1  3512     3512  con 1000 17:43:50 /usr/bin/rxvt
          I 10200  3512 10200     6616    5 1000 17:43:51 /usr/bin/bash
             9732     1  9732     9732  con 1000 17:45:55 /usr/bin/rxvt
             3148  9732  3148     8976    6 1000 17:45:55 /usr/bin/bash
             5856  3148  5856      876    6 1000 17:51:00 /usr/bin/vim
             7736   924  7736     8036    4 1000 17:53:26 /usr/bin/ps
        $ jobs
        [2]+  Stopped                 ./co2.sh

    Here's an strace on the PID of the hung svn program; it's been like this for hours. Looks like it's just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of?

        Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos
        $ strace -p 7304
        **********************************************
        Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408)
        App version:  1005.25, api: 0.156
        DLL version:  1005.25, api: 0.156
        DLL build:    2008-06-12 19:34
        OS version:   Windows NT-6.0
        Heap size:    402653184
        Date/Time:    2009-07-06 18:20:11
        **********************************************
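    One common cause of svn clients that hang with no output (a guess, not a diagnosis) is the client silently waiting on an authentication or server-certificate prompt that never reaches the terminal. Forcing non-interactive mode makes that failure loud instead of hanging. A sketch of the checkout function with that flag and some basic error reporting:

        function checkout {
            mkdir -p "$1"
            # --non-interactive makes svn fail immediately instead of blocking
            # on a hidden auth/cert prompt; quoting protects paths with spaces.
            svn checkout --non-interactive "svn://dev-server/$1" "$1" \
                || echo "checkout of $1 failed" >&2
            svn log --verbose --xml "$1" >> svn.log
            sleep 30
        }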

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files?

    I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files, and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem either; I don't see any unexpected consequences.

    I don't think copying hardlinked files is a problem, but I'm not as confident about unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., the inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.)

    Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.)

    None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link, while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode).

    My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know?

    BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSDs and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.)

    BTW, since I have more than one drive (with separate filesystems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script:

        sed -i 's/\(.\)/\1/' file1

    Surprisingly, this even unlinks zero-byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS, and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use case, the unlink command doesn't really "unlink"; rather, it "removes".
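    To keep track of which files are still hardlinked, the inode and link count are easy to inspect from the shell; a small sketch (file names are placeholders):

        # Show inode, link count, and name; a count > 1 means other names share this inode.
        stat -c '%i %h %n' file1

        # Find every other name for the same file on the same filesystem
        # (-xdev stops find from crossing onto the other disks).
        find / -xdev -samefile file1 2>/dev/null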

    Read the article

  • rpm rollback ignoring rpms - no error output

    - by John H
    Issue rpm rollback is not working with a set of repackaged rpms created in the last couple days, but does work with more recent ones. [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# rpm -q zsh zsh-4.2.6-6.el5 [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' [root@host1 repackage]# rpm -e zsh [root@host1 repackage]# [root@host1 repackage]# ls -l zsh* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm -rw-r--r-- 1 root root 1789064 Aug 20 09:06 zsh-4.2.6-6.el5.i386.rpm [root@host1 repackage]# cp zsh-4.2.6-6.el5.i386.rpm /tmp [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:02:16 2012 (0x50323558): Preparing... ########################################### [100%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# cp /tmp/zsh-4.2.6-6.el5.i386.rpm . [root@host1 repackage]# rpm -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:06:05 2012 (0x5032363d): Preparing... ########################################### [100%] 1:zsh ########################################### [ 50%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# rpm --test -Uvh --rollback 'April 9' [root@host1 repackage]# Now, if I run my test commands with -Uvvh I get debug messages to stderror which shows me that rpm reads each of the rpm files in /var/spool/repackage. The only interesting bit is the "expected size" but after searching, the expected size should be different, as it records the files as they are on the filesystem. D: opening db environment /var/lib/rpm/Packages joinenv D: opening db index /var/lib/rpm/Packages rdonly mode=0x0 D: locked db index /var/lib/rpm/Packages D: opening db index /var/lib/rpm/Installtid rdonly mode=0x0 D: opening db index /var/lib/rpm/Pubkeys rdonly mode=0x0 D: read h# 769 Header sanity check: OK D: ========== DSA pubkey id 53268101 37017186 (h#769) D: read h# 32 Header V3 DSA signature: OK, key ID 37017186 D: read h# 40 Header V3 DSA signature: OK, key ID 37017186 ... D: read h# 1753 Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 3628918 = lead(96)+sigs(344)+pad(0)+data(3628478) D: Actual size: 3583695 D: /var/spool/repackage/Deployment_Guide-en-US-5.2-11.noarch.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1100789 = lead(96)+sigs(344)+pad(0)+data(1100349) D: Actual size: 1109281 D: /var/spool/repackage/NetworkManager-0.7.0-10.el5_5.2.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1098167 = lead(96)+sigs(344)+pad(0)+data(1097727) D: Actual size: 1106179 D: /var/spool/repackage/NetworkManager-0.7.0-9.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 84351 = lead(96)+sigs(344)+pad(0)+data(83911) D: Actual size: 85378 ... 
D: Expected size: 1788276 = lead(96)+sigs(344)+pad(0)+data(1787836) D: Actual size: 1788691 D: /var/spool/repackage/zsh-4.2.6-5.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: --- erase h#1758 D: closed db index /var/lib/rpm/Pubkeys D: closed db index /var/lib/rpm/Installtid D: closed db index /var/lib/rpm/Packages D: closed db environment /var/lib/rpm/Packages D: May free Score board((nil)) I am able to copy these rpms out of the repackage directory and, if I run them through cpio, extract their files. I also tried backing up and rebuilding the rpm database, with no change. System information: RHEL 5.8, rpm 4.4.2.3; /etc/yum.conf has tsflags=repackage; /etc/rpm/macros has %_repackage_all_erasures 1
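    Not commands from the post, but a sketch of one way to probe the repackaged rpms directly (rpm -K and --queryformat are standard rpm options); if the older files fail digest verification, that would fit rollback silently skipping them:

        # Verify each repackaged rpm's signature and payload digests,
        # and print what it claims about itself.
        for f in /var/spool/repackage/*.rpm; do
            echo "== $f"
            rpm -qp --queryformat '%{NAME}-%{VERSION}-%{RELEASE}, built %{BUILDTIME:date}\n' "$f"
            rpm -Kv "$f"
        done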

    Read the article

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to setting up a RADIUS server to authenticate WPA2 against an LDAP directory. On Ubuntu. I have a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We have a separate physical network just for WiFi, so not too many worries about security on that front. Our APs are HP's low-end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby! And the bad news: I know somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with the exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure" things if avoidable. UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle: Ignoring EAP-Type/tls because we do not have OpenSSL support. Ignoring EAP-Type/ttls because we do not have OpenSSL support. Ignoring EAP-Type/peap because we do not have OpenSSL support. Basically the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP-types useless. Bummer. But some useful documentation for anybody interested: http://vuksan.com/linux/dot1x/802-1x-LDAP.html http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (see the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible). Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works. UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP: I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of: uid: testuser sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05 sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90 userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja When using radtest, I can connect fine: > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password Sending Access-Request of id 215 to 130.225.235.6 port 1812 User-Name = "msiebuhr" User-Password = "mr2Yx36N" NAS-IP-Address = 127.0.1.1 NAS-Port = 0 rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20 > But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords: ... rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930 rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035 [ldap] looking for reply items in directory... WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly? 
[ldap] user testuser authorized to use remote access rlm_ldap: ldap_release_conn: Release Id: 0 ++[ldap] returns ok ++[expiration] returns noop ++[logintime] returns noop [pap] Normalizing NT-Password from hex encoding [pap] Normalizing LM-Password from hex encoding ... It is clear that the NT and LM passwords differ from the above, yet the message [ldap] user testuser authorized to use remote access appears - and the user is later rejected...
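    A quick sanity check a reader could run (assumes xxd is installed; not part of the original post): the 0x... values rlm_ldap logs are raw byte dumps of what it read from LDAP, and decoding one suggests it is the same ASCII hash string stored in sambaNTPassword rather than a different value:

        # Decode the NT-Password bytes from the debug output above.
        echo 4441343431383745434139374237433134413232463239463532424542443930 | xxd -r -p; echo
        # -> DA44187ECA97B7C14A22F29F52BEBD90 (matches sambaNTPassword)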

    Read the article

  • Determining cause of high NFS/IO utilization without iotop

    - by Matt
    I have a server that is doing an NFSv4 export for users' home directories. There are roughly 25 users (mostly developers/analysts) and about 40 servers mounting the home directory export. Performance is miserable, with users often seeing multi-second lags for simple commands (like ls, or writing a small text file). Sometimes the home directory mount completely hangs for minutes, with users getting "permission denied" errors. The hardware is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5” 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as a LSI MegaSAS 9260). OS is CentOS 5.6, home directory partition is ext3, with options “rw,data=journal,usrquota”. I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for the home directories. What I find curious, and suspicious, is that the sda device often has very high utilization, even though it only contains the OS. I would expect this virtual drive to be idle almost all the time. The system is not swapping, according to "free" and "vmstat". Why would there be major load on this device? Here is a 30-second snapshot from iostat: Time: 09:37:28 AM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 44.09 0.03 107.76 0.13 607.40 11.27 0.89 8.27 7.27 78.35 sdb 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 sdb1 0.00 2616.53 0.67 157.88 2.80 11098.83 140.04 8.57 54.08 4.21 66.68 dm-0 0.00 0.00 0.03 151.82 0.13 607.26 8.00 1.25 8.23 5.16 78.35 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 dm-3 0.00 0.00 0.67 2774.84 2.80 11099.37 8.00 474.30 170.89 0.24 66.84 Looks like iotop is the ideal tool to use to sniff out these kinds of issues. But I'm on CentOS 5.6, which doesn't have a new enough kernel to support that program. I looked at Determining which process is causing heavy disk I/O?, and besides iotop, one of the suggestions said to do a "echo 1 > /proc/sys/vm/block_dump". I did that (after directing kernel messages to tempfs). In about 13 minutes I had about 700k reads or writes, roughly half from kjournald and the other half from nfsd: # egrep " kernel: .*(READ|WRITE)" messages | wc -l 768439 # egrep " kernel: kjournald.*(READ|WRITE)" messages | wc -l 403615 # egrep " kernel: nfsd.*(READ|WRITE)" messages | wc -l 314028 For what it's worth, for the last hour, utilization has constantly been over 90% for the home directory drive. My 30-second iostat keeps showing output like this: Time: 09:36:30 PM Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 6.46 0.20 11.33 0.80 71.71 12.58 0.24 20.53 14.37 16.56 sdb 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 sdb1 137.29 7.00 549.92 3.80 22817.19 43.19 82.57 3.02 5.45 1.74 96.32 dm-0 0.00 0.00 0.20 17.76 0.80 71.04 8.00 0.38 21.21 9.22 16.57 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.81 dm-3 0.00 0.00 687.47 10.80 22817.19 43.19 65.48 4.62 6.61 1.43 99.82
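    As an aside, the block_dump output is easier to reason about once aggregated per process; a rough sketch using standard text tools (not from the original post) that tallies reads/writes by process name:

        # Count block_dump lines per process name, busiest first.
        egrep ' kernel: .*(READ|WRITE|dirtied)' messages \
          | sed -e 's/.* kernel: //' -e 's/(.*//' \
          | sort | uniq -c | sort -rn | head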

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experimenting for quite some time with a "typical" Django setup on nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments). Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like under these hypotheses: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts from my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert len(splitted) >= 2 splitted.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response) folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project apps are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for only one django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or.. should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse is that apache2 relatively often gets stuck. 
It just does not answer anymore and has to be restarted. I have set up monit to take care of that, but it isn't a real solution. I have been wondering if it's an issue with database access (postgresql) under heavy load, but even if it were, why would the apache2 processes get stuck? Besides these two issues, performance is overall great. I even tried New Relic and got very good average results.
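    A hedged debugging sketch (assumes strace, pgrep and coreutils timeout are available; not part of the original post): when Apache stops answering, a snapshot of what each process is blocked on usually distinguishes a database stall from thread starvation in the daemon:

        # Run while the server is stuck, before restarting it.
        for pid in $(pgrep apache2); do
            echo "== PID $pid"
            timeout 5 strace -f -p "$pid" 2>&1 | head -40
        done

    If every thread sits in recv/poll on the PostgreSQL socket, the database is the likely bottleneck; if they all wait on the same mutex or accept call, that points back at the WSGI daemon configuration.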

    Read the article

  • Diagnosing Solaris 8 server memory and swap space usage

    - by datSilencer
    Hello everyone. Essentially, my question is related to memory allocation for Solaris virtual machines. I am running a couple of old Sun ONE 6 Java web servers on two Solaris 8 virtual machines. I see that there's a reasonable amount of swap space being used, but I'm not exactly sure if this could indicate a need to add more RAM to these machines. At service peak hours (mornings usually), the response time of the web application these servers host jumps up to at most 11 seconds (somewhat detrimental for a relatively simple web page loading action). Average response time at non-peak times is about 5 seconds. What would you be able to infer about the RAM usage for these machines from the output below? Is this information reasonably sufficient? Or would I need to run some other commands to rule out server memory starvation? Finally, since there is a Java application at the core of the setup, I've also thought about: 1) Tracing the heap's object allocation to detect potential memory leaks. 2) Doing some performance profiling to see if this is instead related to network delays. I mention this since the application talks with a single Oracle Database, but I would doubt this to be the case since they're pretty close from a network segmentation perspective. I appreciate any kind of insight and feedback you could provide. Thanks for your time and help. Server 1: 40 processes: 38 sleeping, 1 zombie, 1 on cpu CPU states: 99.1% idle, 0.4% user, 0.4% kernel, 0.0% iowait, 0.0% swap Memory: 2048M real, 295M free, 865M swap in use, 3788M swap free PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND 12676 webservd 112 29 10 616M 242M sleep 103:37 0.48% webservd 18317 root 1 59 0 23M 19M sleep 67:24 0.08% perl 9479 support 1 59 0 6696K 2448K cpu/1 0:11 0.05% top 8012 root 10 59 0 34M 704K sleep 80:54 0.04% java 1881 root 33 29 10 110M 13M sleep 33:03 0.02% webservd 7808 root 1 59 0 83M 67M sleep 7:59 0.00% perl 1461 root 20 59 0 5328K 1392K sleep 6:49 0.00% syslogd 1691 root 2 59 0 27M 680K sleep 4:22 0.00% webservd 24386 root 1 59 0 15M 11M sleep 2:50 0.00% perl 23259 root 1 59 0 11M 4240K sleep 2:42 0.00% perl 24718 root 1 59 0 11M 5464K sleep 2:29 0.00% perl 22810 root 1 59 0 19M 11M sleep 2:21 0.00% perl 24451 root 1 53 2 11M 3800K sleep 2:18 0.00% perl 18501 root 1 56 1 11M 3960K sleep 2:18 0.00% perl 14450 root 1 56 1 15M 6920K sleep 1:49 0.00% perl Server 2: 42 processes: 40 sleeping, 1 zombie, 1 on cpu CPU states: 98.8% idle, 0.4% user, 0.8% kernel, 0.0% iowait, 0.0% swap Memory: 1024M real, 31M free, 554M swap in use, 3696M swap free PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND 5607 webservd 74 29 10 284M 173M sleep 20:14 0.21% webservd 15919 support 1 59 0 4056K 2520K cpu/1 0:08 0.09% top 13138 root 10 59 0 34M 1952K sleep 210:51 0.08% java 13753 root 1 59 0 22M 12M sleep 170:15 0.07% perl 22979 root 33 29 10 112M 7864K sleep 85:07 0.04% webservd 22930 root 1 59 0 3424K 1552K sleep 17:47 0.01% xntpd 22978 root 2 59 0 27M 2296K sleep 10:49 0.00% webservd 13571 root 1 59 0 9400K 5112K sleep 5:52 0.00% perl 5606 root 2 29 10 29M 9056K sleep 0:36 0.00% webservd 15910 support 1 59 0 9128K 2616K sleep 0:00 0.00% sshd 13106 root 1 59 0 82M 3520K sleep 7:47 0.00% perl 13547 root 1 59 0 12M 5528K sleep 6:38 0.00% perl 13518 root 1 59 0 9336K 3792K sleep 6:24 0.00% perl 13399 root 1 56 1 8072K 3616K sleep 5:18 0.00% perl 13557 root 1 53 2 8248K 3624K sleep 5:12 0.00% perl
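    For readers with the same question, a short sketch of stock Solaris commands (not from the original post) that separate swap reservation from real memory pressure; a persistently non-zero sr (page scan rate) column in vmstat is the classic sign of genuine RAM starvation:

        # Watch the page scanner during peak hours; sustained sr > 0
        # means the kernel is hunting for free pages.
        vmstat 5 5
        # Swap summary: much of "swap in use" can be mere reservation.
        swap -s
        # Processes sorted by resident set size.
        prstat -s rss 5 5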

    Read the article

  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register/change their password/etc. Every other person, however, cannot or does not receive the emails. They are telling me that they are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular there is 1 MX record listed there. How do you go about troubleshooting botched PHP mail actions? UPDATE: Here is my super-simple php mailing code: $subject = "Subject Here"; $message = "Emails Message"; $to = $verified_user_data["email_address"]; $headers = "From: [email protected]\r\n" . "Reply-To: [email protected]\r\n" . "X-Mailer: PHP/" . phpversion(); //returns true on success, false on failure $email_result = mail($to, $subject, $message, $headers); re: "are you saying that some do and some do not?" @ Jacob Yes, basically. I send the emails containing the user's login username/password using code similar to the above. And I sell to fairly tech-savvy people. About 50% of the time, my customers claim they cannot find their welcome emails in their inbox OR in their spam box. It's as if it never arrived. My largest problem seems to be with Yahoo email addresses accepting my emails. re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @ John Gardeniers I rent a dedicated server from Codero. Running CentOS 5, WHM + cPanel. I have full root access to the entire thing. Don't know much about MX records and/or SPF records. I just want the PHP mail function to work. It doesn't say much about that on the php mail function's help page. re: "What are you using for the SMTP server?" @ JonLim No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers? re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @ gravyface I just described my simple code above! I believe I have been having the most trouble with Yahoo domains; however, "independent" domains (probably running spamassassin), e.g. [email protected] as opposed to [email protected], seem to give a lot of trouble as well. I do not know if I have a PTR record set up for the IP address I'm sending my mail from. It's probably the same IP address that I set up my domain on, because I didn't do anything extra special. No idea about SPF records either, where can I go to create one? Side Note: It's a crying shame what havoc the spammers have brought upon our beloved email system.
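    Since the question is how to troubleshoot, here is a generic sketch (domain names and the IP are placeholders, and the mail log path varies by distro) that tests each hop PHP's mail() depends on - the local MTA first, then the DNS records receiving servers check:

        # 1. Bypass PHP entirely: does the local MTA deliver at all?
        #    -v prints the SMTP conversation.
        echo "test body" | /usr/sbin/sendmail -v someone@example.com
        # 2. DNS records receivers (especially Yahoo) look at:
        dig +short MX yourdomain.com      # inbound mail routing
        dig +short TXT yourdomain.com     # SPF record, if any
        dig +short -x 203.0.113.10        # PTR (reverse DNS) of the sending IP
        # 3. Watch the mail log while a welcome email goes out:
        tail -f /var/log/maillog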

    Read the article

  • Users suddenly missing write permissions to the root drive c within an active directory domain

    - by Kevin
    I'm managing an active directory single domain environment on some Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012 machines. A few weeks ago a strange issue appeared. Some users (not all!) report that they can no longer save, copy or write files to the root drive C:, neither on their clients (Vista, Win 7) nor via a remote desktop connection on a Windows Server 2008 machine. Even running programs that require direct write permissions to the root drive without administrator permissions has failed since then. The affected users have local administrator permissions. The question I'm facing now is: What caused this change of system behavior? Why did this happen? I haven't found out yet. What was the last thing I did before it happened? The last action before it happened was the rollout of a GPO containing network drive mappings for the users depending on their security group membership. All network drives are located on a Linux server with Samba enabled. We did not change any UAC settings, and they have always been activated. However, I can't imagine that rolling out this GPO caused the problem. Has anybody faced an issue like that? Just in case: I know that it is for a specific reason that a user without administrative privileges has been prevented from writing to the root drive since Windows Vista and the introduction of UAC. I don't think that those users should be able to write to drive C:, but I am trying to figure out why this is happening, and a few weeks ago this was still working. I also know that a user who is a member of the local administrators group does not execute anything with administrator permissions by default unless he or she executes a program with those permissions. What have I done so far? I checked the permissions of the affected programs and the affected clients/servers. I didn't find anything special. I checked ALL of our GPOs for restrictions that could prevent the affected users from writing to the root drive. I did not find any such settings. I checked the UAC settings of the affected users and compared those to other users that can still write to the root drive. Everything is similar. I googled through the internet and tried to find someone who had a similar problem. I did not find one. Does anybody have an idea? Thank you very much. Edit: The GPO that was rolled out does the following (please excuse if the settings are not named exactly like that, I translated the settings into English): **Windows Settings -- Network Drive Mappings -- Drive N: -- General:** Action: Replace **Properties:** Letter: N Location: \\path-to-drive\drivename Re-Establish connection: deactivated Label as: Name_of_the_Share Use first available Option: deactivated **Windows Settings -- Network Drive Mappings -- Drive N: -- Public: Options:** On error don't process any further elements for this extension: no Run as the logged in user: no remove element if it is not applied anymore: no Only apply once: no **Securitygroup:** Attribute -- Value bool -- AND not -- 0 name -- domain\groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 **Securitygroup:** Attribute -- Value bool -- OR not -- 0 name -- domain\another-groupname sid -- sid-of-the-group userContext -- 1 primaryGroup -- 0 localGroup -- 0 Edit: The error message an affected user gets says the following: Due to an unexpected error you can't copy the file. Error-Code 0x80070522: The client is missing a required permission. 
The command icacls C: shows the following: NT-AUTORITY\SYSTEM:(OI)(CI)(F) PRE-DEFINED\Administrators:(OI)(CI)(F) computername\username:(OI)(CI)(F) A colleague just told me that the primary domain controller (PDC) was also changed from Windows Server 2008 to Windows Server 2012. That may also be a factor. Any suggestions?

    Read the article

  • Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important but I'm not sure if I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows: external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log The external ACL helper (mysql_lg.php) is a PHP script and is as follows: error_reporting(0); if (! defined(STDIN)) { define("STDIN", fopen("php://stdin", "r")); } $res = mysql_connect('localhost', 'squid', 'testsquidpw'); $dbres = mysql_select_db('squid', $res); while (!feof(STDIN)) { $line = trim(fgets(STDIN)); $fields = explode(' ', $line); $user = rawurldecode($fields[0]); $cli_ip = rawurldecode($fields[1]); $protocol = rawurldecode($fields[2]); $uri = rawurldecode($fields[3]); $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');"; mysql_query($q) or die (mysql_error()); if ($fault) { fwrite(STDOUT, "ERR\n"); }; fwrite(STDOUT, "OK\n"); } The configuration I have right now looks like this: ## Authentication Handler auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 30 auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic auth_param negotiate children 5 # Allow squid to update log external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php acl ex_log external mysql_log http_access allow ex_log acl localnet src 172.16.45.0/24 acl AuthorizedUsers proxy_auth REQUIRED acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl CONNECT method CONNECT acl blockeddomain url_regex "/etc/squid3/bl.acl" http_access deny blockeddomain deny_info ERR_BAD_GENERAL blockeddomain # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # Allow the internal network access to this proxy http_access allow localnet # Allow authorized users access to this proxy http_access allow AuthorizedUsers # FINAL RULE - Deny all other access to this proxy http_access deny all From testing, the closer to the bottom I place the logging lines the less it logs. Oftentimes, it even places empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting the lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request. The reason I want this in MySQL is because I'm trying to have everything managed via a custom web-based frontend and want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
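    A small test harness one could run (values made up) to separate helper bugs from ACL ordering: feed the helper one line in the exact %LOGIN %SRC %PROTO %URI shape Squid sends, then look at what actually landed in the table:

        # Drive the helper by hand, the way Squid would.
        echo "testuser 172.16.45.10 HTTP http%3A//example.com/" | php /etc/squid3/custom/mysql_lg.php
        # Did exactly one sensible row appear?
        mysql -u squid -ptestsquidpw squid -e 'SELECT * FROM logs ORDER BY id DESC LIMIT 5;'

    If hand-fed lines log cleanly, the empty rows likely come from requests where one of the four tokens is empty (e.g. no %LOGIN before authentication has happened), which shifts the explode() fields - worth ruling out before blaming ACL order.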

    Read the article

  • Extended URL php

    - by web.ask
    I have a problem with extended URLs here in PHP. I'm not even sure if "extended URL" is the right term. I was wondering if someone can help me since I'm no good with PHP. I have a PHP page, careers.php, that shows the career listing, but when I click a Learn More link it goes to http://localhost/houseofpatel/jobs/4/Career-4 and outputs "Object not found". Does xampplite have something to do with this, or is it just the code? I also have the breadcrumb.php in here. Please see the code below: session_start(); include("Breadcrumb.php"); $trail = new Breadcrumb(); $trail->add('Careers', $_SERVER['PHP_SELF'], 1); $trail -> output(); # REMOVE SPECIAL CHARACTERS $sPattern = array('/[^a-zA-Z0-9 -]/', '/[ -]+/', '/^-|-$/'); $sReplace = array('', '-', ''); mysql_query("SET CHARACTER_SET utf8"); //$query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT 0, 4"; if(!$_GET['jobs']){ $jobsPerPage = 4; }else{ $jobsPerPage = $_GET['jobs']; } if(!$_GET['page']){ $currPage = 1; $showMoreJobs = true; }else{ $currPage = $_GET['page']; $showMoreJobs = false; } $offset = ($currPage - 1 )* $jobsPerPage; $query = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted"; $process = mysql_query($query); if(@mysql_num_rows($process) > 0){ $totalRows = @mysql_num_rows($process); $query2 = "SELECT * FROM careers_mst WHERE status = 1 ORDER BY dateposted DESC LIMIT $offset, $jobsPerPage"; $process2 = mysql_query($query2); if(@mysql_num_rows($process2) > 0){ $pagNav = pageNavigator($totalRows,$jobsPerPage,$currPage,''); while($row = @mysql_fetch_array($process2)){ $id = $row[0]; $title = stripslashes($row['title']); $title1 = preg_replace($sPattern, $sReplace, $title); $description = stripslashes($row['description']); $description1 = substr($description, 0, 200); $careers_list .= ' <div class="left" style="width:340px; padding-left:10px;"> <p><span class="blue"><strong>'.$title.'</strong></span></p> '.$description1.'... <p><a href="jobs/'.$id.'/'.$title1.'" title="Learn more">[ Learn More ]</a></p> </div>'; } /*$careers_list .= ' <div class="left" style="width: 100%; margin-top:15px;"> <p><a href="more-jobs" title="More Job Opennings">MORE JOB OPENNINGS >> </a></p> <div class="divider left"></div> </div>'; */ if($showMoreJobs){ if($totalRows > $jobsPerPage){ $currPage++; } $jobNav = "<a href=careers.php?page=$currPage title='More Job Opennings'>MORE JOB OPENNINGS >> </a>"; }else{ $jobNav = "$pagNav"; } $careers_list .= "<div class='left' style='width: 100%; margin-top:15px;'> <p>$jobNav</p> <div class='divider left'></div> </div>"; } }
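    The "Object not found" suggests Apache has no rewrite rule mapping the pretty /jobs/... URL back to a real script. A hypothetical sketch (the target script name and the pattern are assumptions; adjust to whatever script actually renders a single job), plus a quick check:

        # Write a rewrite rule into the site folder (mod_rewrite must be
        # enabled and AllowOverride permitted in XAMPP's httpd.conf).
        printf '%s\n' \
          'RewriteEngine On' \
          'RewriteBase /houseofpatel/' \
          'RewriteRule ^jobs/([0-9]+)/([A-Za-z0-9-]+)/?$ job.php?id=$1 [L,QSA]' \
          > /path/to/htdocs/houseofpatel/.htaccess
        # Verify the URL now reaches PHP instead of returning a 404.
        curl -I http://localhost/houseofpatel/jobs/4/Career-4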

    Read the article

  • Cannot read status the monit daemon, even with allowed group

    - by jefflunt
    I cannot seem to get monit status or other CLI commands to work. I've built monit v5.8 to run on a Raspberry Pi. I'm able to add services to be monitored, and the web interface can be accessed just fine, as I've set it up for public read-only access (it's a test server, not my final production setup, so not a big deal right now). Problem is, when I run monit status while logged in as root I get: # monit status monit: cannot read status from the monit daemon I also have monit started on boot via this /etc/inittab file entry: mo:2345:respawn:/usr/local/bin/monit -Ic /etc/monitrc I've verified that monit is running, and I'm getting email alerts anytime I either kill the monit process manually or reboot my Raspberry Pi. So, next I check my monitrc file permissions to see which group is allowed access. # ls -al /etc/monitrc -rw------- 1 root root 2359 Aug 24 14:48 /etc/monitrc Here's the relevant allow section of my control file: set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 I also tried setting the permissions on this file to 640 to allow group read access, but no matter what I try I either get the same error as noted above or, with the permissions set to 640, this: # monit status monit: The control file '/etc/monitrc' must have permissions no more than -rwx------ (0700); right now permissions are -rw-r----- (0640). What am I missing here? I know that the httpd must be enabled, as that's the interface that the CLI uses to get information (or so I've read), so I've done that. And in terms of monit doing its monitoring job and sending email alerts, that's all working as well. Here's my entire monitrc file - again, this is version v5.8, and it was built with both PAM and SSL support. The process runs under the root user: # Global settings set daemon 300 with start delay 5 set logfile /var/log/monit.log set pidfile /var/run/monit.pid set idfile /var/run/.monit.id set statefile /var/run/.monit.state # Mail alerts ## Set the list of mail servers for alert delivery. Multiple servers may be ## specified using a comma separator. If the first mail server fails, Monit # will use the second mail server in the list and so on. By default Monit uses # port 25 - it is possible to override this with the PORT option. # set mailserver smtp.gmail.com port 587 username [omitted] password [omitted] using tlsv1 ## Send status and events to M/Monit (for more informations about M/Monit ## see http://mmonit.com/). By default Monit registers credentials with ## M/Monit so M/Monit can smoothly communicate back to Monit and you don't ## have to register Monit credentials manually in M/Monit. It is possible to ## disable credential registration using the commented out option below. ## Though, if safety is a concern we recommend instead using https when ## communicating with M/Monit and send credentials encrypted. 
# # set mmonit http://monit:[email protected]:8080/collector # # and register without credentials # Don't register credentials # # ## Monit by default uses the following format for alerts if the the mail-format ## statement is missing:: set mail-format { from: [email protected] subject: $SERVICE $DESCRIPTION message: $EVENT Service: $SERVICE Date: $DATE Action: $ACTION Host: $HOST Description: $DESCRIPTION Monit instance provided by chicagomeshnet.com } # Web status page set httpd port 80 allow [omitted] readonly allow @root allow localhost allow 0.0.0.0/0.0.0.0 ## You can set alert recipients whom will receive alerts if/when a ## service defined in this file has errors. Alerts may be restricted on ## events by using a filter as in the second example below.
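    A hedged suggestion, not from the post: the monit CLI talks to the daemon through that same built-in httpd and, if the docs are right, authenticates with the first cleartext username:password it finds in the control file - host- and group-based allow lines give it nothing to log in with. A quick experiment:

        # Point the CLI at the same control file the daemon was started with.
        monit -c /etc/monitrc status
        # Confirm the built-in httpd is really listening.
        netstat -tlnp | grep -i monit
        # If it still fails, add one cleartext credential to monitrc
        # (hypothetical password - pick your own), e.g.:
        #   set httpd port 80
        #       allow localhost
        #       allow admin:changeme
        # then reload and retry:
        monit -c /etc/monitrc reload && monit -c /etc/monitrc status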

    Read the article

  • NTFS Permissions - Access Denied even though Explicit Allow and no Deny

    - by chris613
    I'm hoping someone can help me with this NTFS permissions problem. The short version is that I can't write a new file in F:\SomeDir even though I seem to be granted full permissions via both the "Domain Admins" group and a second unprivileged group. The "Effective Permissions" tab in the explorer permissions UI shows that I have full control, and there are no "Deny"s anywhere in the ACL or anything else that looks unusual. I am logged into the machine over RDP and accessing the disk directly, not through a share. F:\SomeDir>set U USERDNSDOMAIN=THEOFFICE.LOCAL USERDOMAIN=THEOFFICE USERNAME=thisisme USERPROFILE=C:\Users\thisisme F:\SomeDir>icacls . . BUILTIN\Administrators:(I)(F) CREATOR OWNER:(I)(OI)(CI)(IO)(F) THEOFFICE\Domain Admins:(I)(OI)(CI)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "Domain Admins" The request will be processed at a domain controller for domain THEOFFICE.local. Group name Domain Admins Comment Designated administrators of the domain Members ------------------------------------------------------------------------------- Administrator thatguy thisisme The command completed successfully. F:\SomeDir>echo "whyUNoCreateFile?" > whyUNoCreateFile.txt Access is denied. I searched for answers and came across similar problems that led to UAC (ex. Why does removing the EVERYONE group prevent domain admins from accessing a drive? ). I can't turn off UAC at the moment, so I try a "regular" group that I'm also part of. This group has no special rights assignments and is not part of any administrative groups. Still no dice: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\iteveryone:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "iteveryone" The request will be processed at a domain controller for domain THEOFFICE.local. Group name ITeveryone Comment Members ------------------------------------------------------------------------------- Administrator thatguy thisisme otherguy someitguy The command completed successfully. F:\ScanningVMsForIBM>echo y > u Access is denied. As you can see, using a "regular" group didn't help. I have logged out and back in to the server to ensure my login token is up to date, and at any rate I belonged to these groups before the server was created. If I grant explicit permission to myself, it does allow me to write files: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\thisisme:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>echo y > u F:\SomeDir>type u y My requirement is for the "Domain Admins" group to have Full Control, or if that's not possible without disabling UAC, then a second group will do, but I can't get either to work. I'm really stumped. Can someone please point out what I could be overlooking?

    Read the article

  • dvd drive I/O error?

    - by bobby
    I get this error every time I burn a CD/DVD through my DVD drive: Nero Burning ROM bobby 4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*) Windows XP 6.1 IA32 WinAspi: - NT-SPTI used Nero Version: 7.11.3. Internal Version: 7, 11, 3, (Nero Express) Recorder: Version: UL01 - HA 1 TA 1 - 7.11.3.0 Adapter driver: HA 1 Drive buffer : 2048kB Bus Type : default CD-ROM: Version: 52PP - HA 1 TA 0 - 7.11.3.0 Adapter driver: HA 1 === Scsi-Device-Map === === CDRom-Device-Map === ATAPI-CD ROM-DRIVE-52MAX F: CdRom0 HL-DT-ST DVDRAM GSA-H12N G: CdRom1 ======================= AutoRun : 1 Excluded drive IDs: WriteBufferSize: 83886080 (0) Byte BUFE : 0 Physical memory : 958MB (981560kB) Free physical memory: 309MB (317024kB) Memory in use : 67 % Uncached PFiles: 0x0 Use Inquiry : 1 Global Bus Type: default (0) Check supported media : Disabled (0) 11.6.2010 CD Image 10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450 LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL 10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186 HL-DT-ST DVDRAM GSA-H12N Buffer underrun protection activated 10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500 Turn on Disc-At-Once, using CD-R/RW media 10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307 Last possible write address on media: 359848 ( 79:59.73) Last address to be written: 318783 ( 70:52.33) 10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319 Write in overburning mode: NO (enabled: CD) 10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988 Recorder: HL-DT-ST DVDRAM GSA-H12N; CDR code: 00 97 27 18; OSJ entry from: Plasmon Data systems Ltd. ATIP Data: Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A ( LO 79:59.74) Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 00 00 (invalid) 10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493 Protocol of DlgWaitCD activities: TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blocks [G: HL-DT-ST DVDRAM GSA-H12N] -------------------------------------------------------------- 10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986 Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO DAO infos: ========== MCN: "" TOCType: 0x00; Session Closed, disc fixated Tracks 1 to 1: Idx 0 Idx 1 Next Trk 1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 6531768 32, ISRC "" DAO layout: =========== ___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________ -150 | lead-in | 0 | 0x41 | 0 | 0 | 0x00 -150 | 1 | 0 | 0x41 | 0 | 0 | 0x00 0 | 1 | 1 | 0x41 | 318784 | 318784 | 0x00 318784 | lead-out | 1 | 0x41 | 0 | 0 | 0x00 10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240 SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME 10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286 Caching options: cache CDRom or Network-Yes, small files-Yes ( ON 10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034 CueData, Len=32 41 00 00 14 00 00 00 00 41 01 00 10 00 00 00 00 41 01 01 10 00 00 02 00 41 aa 01 14 00 46 34 22 10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268 Pipe memory size 83836800 10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405 10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later 10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181 CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502) Sense Key: 0x04 (KEY_HARDWARE_ERROR) Sense Code: 0x08 Sense Qual: 0x03 CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00 Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03 Buffer 
x0c7d9a40: Len x10000 0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24 0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16 0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23 10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306 DMA-driver error, CRC error G: HL-DT-ST DVDRAM GSA-H12N 10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767 Burn process failed at 48x (7,200 KB/s) 10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287 SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME 10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412 DriveLocker: UnLockVolume completed 10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450 UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL Existing drivers: Registry Keys: HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, all housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system since we have a Barracuda box already that will handle in/out spam/virus filtering. Backend, I'm planning on 2 mailbox servers which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows: Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR? I'm trying to figure out disk layouts and I'm unsure of whether to use all local disk or some local and some SAN, via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 - 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM. Option 1: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 1 - Mail stores Option 2: 2 drives, RAID 1 - OS and logs 4 drives, RAID 5 - Mail Stores and scratch space for eseutil. Option 3: 2 drives, RAID 1 - OS 2 drives, RAID 1 - Logs 2 drives, RAID 0 - scratch space ~300GB iSCSI volume for mail stores Option 4: 2 drives, RAID 1 - OS 4 drives, RAID 5 - scratch space ~300GB iSCSI volume for mail stores ~300GB iSCSI volume for logs I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache and, I'm thinking, an older architecture. Am I better off with more cores and cache while sacrificing clock speed? I am planning on adding the new E2K7 cluster to the E2K3 server and then moving each mailbox over, all at once, then removing the old server. This seems more complicated than simply getting rid of the 2003 server and then adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my time, where a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no and the migration is my only real option if I want to prebuild. I also need to migrate about 30GB of Public Folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs? Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover, we'll be at like 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients, does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN. Thank you all very much for the assistance in advance and I look forward to discussion of these points!

    Read the article

< Previous Page | 357 358 359 360 361 362 363 364 365 366 367 368  | Next Page >