Search Results

Search found 8840 results on 354 pages for 'drupal developers'.

Page 332/354

  • Remote XP -> Win98 WMI Connection

    - by Logan Young
    I've asked this on TechNet, but because Win98 is no longer supported I can't get any decent information, so I was hoping there might be some "old school" developers here who can help me.

    There is an application that we use a lot at work. It should run from 8am to 5pm with as little interruption as possible. Most of the computers it runs on use Win98, and we have no way to upgrade them because we can't buy new hardware at the moment. My computer runs WinXP, so I thought of a way to make sure the application runs all the time: develop a Windows service that executes a VBScript file containing a WMI query to get a list of processes from each computer. Each list is then examined and, depending on whether or not the target application is running, the service either does nothing or executes another VBScript file containing a WMI query that starts the target application remotely. I later found a way to do all of this with one VBScript file (see the code below).

    My problem is the remote connection to the target computers. I've installed WMI Core 1.5 on them, but every time I try the remote connection I get the following:

        The remote server is unavailable or does not exist: 'GetObject'
        VBScript runtime error 800A01CE

    I've done some research, and all I've found is information about DCOM Config and Windows Firewall, but Win98 has neither of these.

        ' #### Variables and constants ####
        Const HIDDEN_WINDOW = 12
        Dim T
        ' #### End variables and constants ####

        Main()

        Sub Main()
            ' #### Get process information from WMI
            Computer = "."
            Set WMI = GetObject("winmgmts:" & _
                "{ImpersonationLevel=Impersonate}!\\" & Computer & "\root\cimv2")
            Set Settings = WMI.ExecQuery("SELECT * FROM Win32_Process")

            For Each Process In Settings
                ' #### If the application is found to be running, set a value to indicate this
                If Process.Name = "NOTEPAD.EXE" Then
                    T = True
                End If
            Next

            ' #### T only has a value if the application is running. We therefore
            ' #### evaluate it: if it has no value, start the application.
            If Not T Then
                'MsgBox("Application not found.")
                Set Startup = WMI.Get("Win32_ProcessStartup")
                Set Config = Startup.SpawnInstance_
                Config.ShowWindow = HIDDEN_WINDOW
                Set Process = GetObject("winmgmts:root\cimv2:Win32_Process")
                errReturn = Process.Create(_
                    "C:\Windows\notepad.exe", null, Config, intProcessID)
            End If
        End Sub

    This uses WMI to get the list of processes from the local computer; if the target application is running it does nothing, otherwise it forcefully starts the target application. The problem is that this only works if I specify the local computer. If I target another computer, I get the error mentioned above. Does anyone have any ideas? Thanks in advance for the help!

    Read the article

  • Android - binding to service

    - by tommy
    Hi: I can't seem to get an activity to bind to a service in the same package. The activity looks like this:

        public class myApp extends TabActivity {
            static private String TAG = "myApp";
            private myService mService = null;

            private ServiceConnection mServiceConn = new ServiceConnection() {
                public void onServiceConnected(ComponentName name, IBinder service) {
                    Log.v(TAG, "Service: " + name + " connected");
                    mService = ((myService.myBinder) service).getService();
                }

                public void onServiceDisconnected(ComponentName name) {
                    Log.v(TAG, "Service: " + name + " disconnected");
                }
            };

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                doBind();
                Log.i(TAG, "Started (UI Thread)");
                // set content
                setContentView(R.layout.main);
                Resources res = getResources(); // Resource object to get Drawables
                TabHost tabHost = getTabHost(); // The activity TabHost
                // ... add some tabs here ...
                tabHost.setCurrentTab(0);
            }

            private void doBind() {
                Intent i = new Intent(this, myService.class);
                if (bindService(i, mServiceConn, 0)) {
                    Log.i(TAG, "Service bound");
                } else {
                    Log.e(TAG, "Service not bound");
                }
            }
        }

    Then the service:

        public class myService extends Service {
            private String TAG = "myService";
            private boolean mRunning = false;

            @Override
            public int onStartCommand(Intent intent, int flags, int startid) {
                Log.i(TAG, "Service start");
                mRunning = true;
                Log.d(TAG, "Finished onStartCommand");
                return START_STICKY;
            }

            /* Called on service stop */
            @Override
            public void onDestroy() {
                Log.i(TAG, "onDestroy");
                mRunning = false;
                super.onDestroy();
            }

            @Override
            public IBinder onBind(Intent intent) {
                return mBinder;
            }

            boolean isRunning() {
                return mRunning;
            }

            /* Class for binding */
            private final IBinder mBinder = new myBinder();

            public class myBinder extends Binder {
                myService getService() {
                    return myService.this;
                }
            }
        }

    bindService returns true, but onServiceConnected is never called (mService is always null, so I can't do something like mService.isRunning()). The manifest entry for the service is just:

        <service android:name=".myService"></service>

    I was copying the code straight from the Android developers site, but I must have missed something.
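    A likely culprit with code shaped like this is the flags argument of bindService(): passing 0 means the framework will not create the service if nothing has started it yet, so onServiceConnected() never fires. A minimal sketch of the usual fix, reusing the class and field names from the question, binds with Context.BIND_AUTO_CREATE:

        // Hedged sketch inside the same activity class; assumes android.content.Context is imported.
        private void doBind() {
            Intent i = new Intent(this, myService.class);
            // BIND_AUTO_CREATE tells the framework to create the service for the
            // duration of the binding, even if startService() was never called.
            if (bindService(i, mServiceConn, Context.BIND_AUTO_CREATE)) {
                Log.i(TAG, "bindService() accepted; waiting for onServiceConnected()");
            } else {
                Log.e(TAG, "Service not bound");
            }
        }

    Note that onServiceConnected() is delivered asynchronously on the main thread, so mService is still null immediately after doBind() returns; any call such as mService.isRunning() has to wait until the callback has actually run.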

    Read the article

  • how to upload a audio file using REST webservice in Google App Engine for Java

    - by sathya
    I'm using Google App Engine with the Eclipse IDE and am trying to upload an audio file. I used the file upload support in Google App Engine for Java and was able to upload the file successfully. Now I'm planning to use a REST web service for it. I've looked through developers.google.com but couldn't work it out. Can anyone suggest how to implement REST web services in Google App Engine using Eclipse? The code Google provides is shown below.

        // file Upload.java
        public class Upload extends HttpServlet {
            private BlobstoreService blobstoreService = BlobstoreServiceFactory.getBlobstoreService();

            public void doPost(HttpServletRequest req, HttpServletResponse res)
                    throws ServletException, IOException {
                Map<String, BlobKey> blobs = blobstoreService.getUploadedBlobs(req);
                BlobKey blobKey = blobs.get("myFile");
                if (blobKey == null) {
                    res.sendRedirect("/");
                } else {
                    res.sendRedirect("/serve?blob-key=" + blobKey.getKeyString());
                }
            }
        }

        // file Serve.java
        public class Serve extends HttpServlet {
            private BlobstoreService blobstoreService = BlobstoreServiceFactory.getBlobstoreService();

            public void doGet(HttpServletRequest req, HttpServletResponse res)
                    throws IOException {
                BlobKey blobKey = new BlobKey(req.getParameter("blob-key"));
                blobstoreService.serve(blobKey, res);
            }
        }

        // file index.jsp
        <%@ page import="com.google.appengine.api.blobstore.BlobstoreServiceFactory" %>
        <%@ page import="com.google.appengine.api.blobstore.BlobstoreService" %>
        <% BlobstoreService blobstoreService = BlobstoreServiceFactory.getBlobstoreService(); %>
        <form action="<%= blobstoreService.createUploadUrl("/upload") %>" method="post" enctype="multipart/form-data">
            <input type="file" name="myFile">
            <input type="submit" value="Submit">
        </form>

        // web.xml
        <servlet>
            <servlet-name>Upload</servlet-name>
            <servlet-class>Upload</servlet-class>
        </servlet>
        <servlet>
            <servlet-name>Serve</servlet-name>
            <servlet-class>Serve</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>Upload</servlet-name>
            <url-pattern>/upload</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>Serve</servlet-name>
            <url-pattern>/serve</url-pattern>
        </servlet-mapping>

    Now, how do I provide a REST web service for the above code? Kindly suggest an idea.
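    One common pattern, sketched here as an assumption rather than an official recipe, is to expose a small servlet that hands a REST client the Blobstore upload URL; the client then POSTs the audio file as multipart/form-data to that URL and the existing Upload servlet handles the callback. The servlet name (UploadUrl) and the /api/upload-url mapping below are made up for illustration:

        // Hypothetical UploadUrl.java: returns a one-time Blobstore upload URL
        // so a REST client can POST the audio file to it directly.
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import com.google.appengine.api.blobstore.BlobstoreService;
        import com.google.appengine.api.blobstore.BlobstoreServiceFactory;

        public class UploadUrl extends HttpServlet {
            private final BlobstoreService blobstoreService =
                    BlobstoreServiceFactory.getBlobstoreService();

            @Override
            public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
                // "/upload" is the callback path handled by the existing Upload servlet above.
                String uploadUrl = blobstoreService.createUploadUrl("/upload");
                res.setContentType("application/json");
                res.getWriter().write("{\"uploadUrl\": \"" + uploadUrl + "\"}");
            }
        }

    The client would then GET /api/upload-url (mapped to this servlet in web.xml), read uploadUrl from the JSON, and POST the file to it using the form field name myFile, exactly as the JSP form does.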

    Read the article

  • Android Camera takePicture function does not call Callback function

    - by Tomáš 'Guns Blazing' Frcek
    I am working on a custom camera activity for my application. I was following the instructions from the Android Developers site here: http://developer.android.com/guide/topics/media/camera.html

    Everything seems to work fine, except that the callback function is not called and the picture is not saved. Here is my code:

        public class CameraActivity extends Activity {
            private Camera mCamera;
            private CameraPreview mPreview;
            private static final String TAG = "CameraActivity";

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.camera);

                // Create an instance of Camera
                mCamera = getCameraInstance();

                // Create our Preview view and set it as the content of our activity.
                mPreview = new CameraPreview(this, mCamera);
                FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview);
                preview.addView(mPreview);

                Button captureButton = (Button) findViewById(R.id.button_capture);
                captureButton.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        Log.v(TAG, "will now take picture");
                        mCamera.takePicture(null, null, mPicture);
                        Log.v(TAG, "will now release camera");
                        mCamera.release();
                        Log.v(TAG, "will now call finish()");
                        finish();
                    }
                });
            }

            private PictureCallback mPicture = new PictureCallback() {
                @Override
                public void onPictureTaken(byte[] data, Camera camera) {
                    Log.v(TAG, "Getting output media file");
                    File pictureFile = getOutputMediaFile();
                    if (pictureFile == null) {
                        Log.v(TAG, "Error creating output file");
                        return;
                    }
                    try {
                        FileOutputStream fos = new FileOutputStream(pictureFile);
                        fos.write(data);
                        fos.close();
                    } catch (FileNotFoundException e) {
                        Log.v(TAG, e.getMessage());
                    } catch (IOException e) {
                        Log.v(TAG, e.getMessage());
                    }
                }
            };

            private static File getOutputMediaFile() {
                String state = Environment.getExternalStorageState();
                if (!state.equals(Environment.MEDIA_MOUNTED)) {
                    return null;
                } else {
                    File folder_gui = new File(Environment.getExternalStorageDirectory()
                            + File.separator + "GUI");
                    if (!folder_gui.exists()) {
                        Log.v(TAG, "Creating folder: " + folder_gui.getAbsolutePath());
                        folder_gui.mkdirs();
                    }
                    File outFile = new File(folder_gui, "temp.jpg");
                    Log.v(TAG, "Returning file: " + outFile.getAbsolutePath());
                    return outFile;
                }
            }
        }

    After clicking the button, I get the logs "will now take picture", "will now release camera" and "will now call finish". The activity finishes successfully, but the callback function was never called during mCamera.takePicture(null, null, mPicture) (there were no logs from the mPicture callback or the getOutputMediaFile function), and there is no file in the specified location. Any ideas? :) Many thanks!
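    A plausible explanation, offered here as a hypothesis rather than a confirmed diagnosis: Camera.takePicture() is asynchronous, so releasing the camera and finishing the activity on the very next lines tears everything down before the JPEG callback can be delivered. A minimal sketch of the usual restructuring, reusing the names from the question, waits for onPictureTaken() before cleaning up:

        // Click handler: only trigger the capture; do not release or finish here.
        captureButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                Log.v(TAG, "will now take picture");
                mCamera.takePicture(null, null, mPicture);
            }
        });

        private PictureCallback mPicture = new PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera camera) {
                File pictureFile = getOutputMediaFile();
                if (pictureFile != null) {
                    try {
                        FileOutputStream fos = new FileOutputStream(pictureFile);
                        fos.write(data);
                        fos.close();
                    } catch (IOException e) { // FileNotFoundException is an IOException
                        Log.v(TAG, "Error writing file: " + e.getMessage());
                    }
                }
                // Clean up only after the image data has arrived.
                mCamera.release();
                finish();
            }
        };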

    Read the article

  • how to use method in AsyncTask in android?

    - by J.R.P
    In my application I use a JSON web service to get data from the Google navigation (directions) API. The code I use is below. I get the exception android.os.NetworkOnMainThreadException. How do I use AsyncTask here? Thanks.

        public class MainActivity extends MapActivity {
            MapView mapView;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                System.out.println("*************1**************1");
                setContentView(R.layout.activity_main);
                System.out.println("*************2**************");
                mapView = (MapView) findViewById(R.id.mapv);
                System.out.println("*************3**************");
                Route route = directions(new GeoPoint((int) (26.2 * 1E6), (int) (50.6 * 1E6)),
                        new GeoPoint((int) (26.3 * 1E6), (int) (50.7 * 1E6)));
                RouteOverlay routeOverlay = new RouteOverlay(route, Color.BLUE);
                mapView.getOverlays().add(routeOverlay);
                mapView.invalidate();
                System.out.println("*************4**************");
            }

            @SuppressLint("ParserError")
            private Route directions(final GeoPoint start, final GeoPoint dest) {
                // https://developers.google.com/maps/documentation/directions/#JSON <- get api
                String jsonURL = "http://maps.googleapis.com/maps/api/directions/json?";
                final StringBuffer sBuf = new StringBuffer(jsonURL);
                sBuf.append("origin=");
                sBuf.append(start.getLatitudeE6() / 1E6);
                sBuf.append(',');
                sBuf.append(start.getLongitudeE6() / 1E6);
                sBuf.append("&destination=");
                sBuf.append(dest.getLatitudeE6() / 1E6);
                sBuf.append(',');
                sBuf.append(dest.getLongitudeE6() / 1E6);
                sBuf.append("&sensor=true&mode=driving");
                Parser parser = new GoogleParser(sBuf.toString());
                Route r = parser.parse();
                System.out.println("********r in thread*****" + r);
                return r;
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }

            @Override
            protected boolean isRouteDisplayed() {
                // TODO Auto-generated method stub
                return false;
            }
        }
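    The exception means the HTTP request inside directions() is running on the UI thread. One common shape for the fix, shown here as a sketch that reuses the Route, RouteOverlay and directions() names from the question (they come from the asker's own routing code, not the Android SDK), is to move the call into an AsyncTask and add the overlay in onPostExecute():

        // Hedged sketch: fetch the route off the UI thread, then touch the MapView
        // back on the UI thread in onPostExecute().
        private class DirectionsTask extends AsyncTask<GeoPoint, Void, Route> {
            @Override
            protected Route doInBackground(GeoPoint... points) {
                // Network I/O happens here, on a background thread.
                return directions(points[0], points[1]);
            }

            @Override
            protected void onPostExecute(Route route) {
                if (route != null) {
                    mapView.getOverlays().add(new RouteOverlay(route, Color.BLUE));
                    mapView.invalidate();
                }
            }
        }

    In onCreate() the blocking call would then be replaced with something like:

        new DirectionsTask().execute(
                new GeoPoint((int) (26.2 * 1E6), (int) (50.6 * 1E6)),
                new GeoPoint((int) (26.3 * 1E6), (int) (50.7 * 1E6)));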

    Read the article

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams, where each team is responsible for a different part of the project. They have all set up their own C# projects and solutions with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory.

    The problem I have encountered is that I have found only one way to make all projects build into the same output directory: modify the configurations for all of them. That is what we would like to avoid. We would prefer that these projects had no knowledge of this "global" solution. Each team must retain the ability to work just with their own sub-solution. One possible workaround is to create a special configuration for all projects just for this global solution, but that could create extra problems, since now you have to constantly keep this configuration in sync with the regular one used by that specific team. The last thing we want is to spend hours trying to figure out why something doesn't work when building under the global solution, just because of some check box that developers ticked in their configuration but forgot to tick in the global configuration.

    So, to simplify: we need some sort of output directory setting or post-build event that would only be present when building from that global, all-inclusive solution. Is there any way to achieve this without changing anything in the projects' configurations?

    Update 1: Some extra details I should mention. We need this global solution to be as close as possible to what the end user gets when installing our application, since we intend to use it for debugging the entire application when we need to figure out which part isn't working before sending the bug to the team responsible for that part. This means that when building under the global solution, the output directory hierarchy should be the same as it would be in Program Files after installation. If, for example, we have a Program Files/MyApplication/Addins folder containing all the addins developed by different teams, the global solution needs to copy the binaries from the addin projects and place them in the output directory accordingly. The thing is, the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to build/bin/Debug/Addins.

    Read the article

  • Defined variables and arrays vs functions in php

    - by Frank Presencia Fandos
    Introduction

    I have some values that I might want to access several times each time a page is loaded. I can take two different approaches to accessing them, but I'm not sure which one is 'better'. Three already-implemented examples are the options for the language, URI and displayed text that I describe here.

    Language

    Right now it is configured this way: lang() is a function that returns different values depending on the argument. Example: lang("full") returns the current language, "English", while lang() returns the abbreviation of the current language, "en". There are many more options, like lang("select"), lang("selectact"), etc., that return different things. The code is too long and irrelevant for this case, so if anyone wants it, just ask for it.

    Url

    The $Url array also returns different values depending on the request. The whole array is fully defined at the beginning of the page and used to get shorter but accurate links to the current page. Example: $Url['full'] would return "http://mypage.org/path/to/file.php?page=1" and $Url['file'] would return "file.php". It's useful for action="" within forms and many other things. There are more values for $Url['folder'], $Url['file'], etc. Same thing about the code: if wanted, just request it.

    Text [You can skip this section]

    There's another array called $Text that is defined in the same way as $Url. The whole array is defined at the beginning, making a MySQL call and defining all $Text[$i] for the current page with a while loop. I'm not sure if this is more efficient than multiple calls for a single MySQL cell. Example: $Text['54'] returns "This is just a test array!", which could perfectly well be implemented with a function like text(54).

    Question

    With the three examples you can see that I use different methods to do almost the same function (no pun intended), but I'm not sure which one should become the standard for my code. I could create a function called url() and another called text() to output what I want. I think that working with functions in these cases is better, but I'm not sure why. So I'd really appreciate your opinions and advice. Should I mix arrays and functions in the way I described, or should I just use functions? Please base your answer on this: the source needs to be readable and reusable by other developers; resource consumption (processing, time and memory); the shorter the code the better; the more you explain the reasons the better. Thank you. PS, now I know the differences between $Url and $Uri.

    Read the article

  • Do you use logical negation operator (!) in "if" statement or check on "== false"

    - by Taras Terebkov
    Hello everyone, I just want to conduct a little survey about the code style developers prefer. For me there are two ways to write an "if" in languages such as Java, C#, C++, etc.

    (1) Logical negation operator:

        public void foo() {
            if (!SessionManager.getInstance().hasActiveSession()) {
                . . . . .
            }
        }

    (2) Check against "false":

        public void foo() {
            if (SessionManager.getInstance().hasActiveSession() == false) {
                . . . . .
            }
        }

    I have always believed that the first way is much worse than the second one, because you usually don't "read" the code, you "recognize" it in one brief look, and the exclamation mark slips from your mind, lurking somewhere at the bottom of your unconscious. Only while reading the "if" block below do you realize that the logic is the opposite: no sessions in the "if". With the second way of writing, on the other hand, the eye immediately catches the words "SessionManager", "hasActiveSession" and "false".

    For me, the situation with "true" is different. In code like

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession == true) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    I find "true" superfluous. Why repeat the sentence twice? The following is shorter and quicker to grasp:

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    What do YOU think, guys?

    Read the article

  • Best way to have common class shared by both C++ and Ruby?

    - by shuttle87
    I am currently working on a project where a team of us are designing a game. All of us are proficient in Ruby and some (but not all) of us are proficient in C++. Initially we wrote the backend in Ruby, but we ported it to C++ for more speed. The C++ port of the backend has exactly the same features and algorithms as the original Ruby code. However, we still have a bunch of Ruby code that does useful things, and we now want it to get its data from the C++ classes. Our first thought was that we could save some of the data structures in something like XML or Redis and call that, but some of the developers don't like that idea. We don't need to pass particularly complex data structures between the different parts of the code, just tuples, strings and ints.

    Is there any way of integrating the Ruby code so that it can call the C++ stuff natively? Will we need to embed code? Will we have to make a Ruby extension? If so, are there any good resources/tutorials you could suggest? For example, say we have this code in the C++ backend:

        class The_game {
        private:
            bool printinfo; // print the player diagnostic info at the beginning if true
            int numplayers;
            std::vector<Player*> players;
            string current_action;
            int action_is_on; // the index of the player in the players array that the action is now on
            // more code here
        public:
            Table(std::vector<Player*> in_players, std::vector<Statistics*> player_stats, const int in_numplayers);
            ~Table();
            void play_game();
            History actions_history;
        };

        class History {
        private:
            int action_sequence_number;
            std::vector<Action*> hand_actions;
        public:
            void print_history();
            void add_action(Action* the_action_to_be_added);
            int get_action_sequence_number() { return action_sequence_number; }
            bool history_actions_are_equal();
            int last_action_size(int street, int number_of_actions_ago);
            History();
            ~History();
        };

    Is there any way to natively call something in actions_history via a The_game object in Ruby? (The objects in the original Ruby code all had the same names and functionality.) By this I mean:

        class MyRubyClass
          def method1(arg1)
            puts arg1
            self.f() # ... but still available
            puts cpp_method.the_current_game.actions_history.get_action_sequence_number()
          end

          # Constructor:
          def initialize(arg)
            puts "In constructor with arg #{arg}"
            # get the c++ object here and call it
            cpp_method
          end
        end

    Is this possible? Any advice or suggestions are appreciated.

    Read the article

  • Why Swift is 100 times slower than C in this image processing test?

    - by xiaobai
    Like many other developers, I have been very excited about the new Swift language from Apple. Apple has boasted that its speed is faster than Objective-C and that it can be used to write an operating system. From what I have learned so far, it is a very type-safe language that gives precise control over exact data types (like integer length). So it does look like it has good potential for handling performance-critical tasks, like image processing, right? That's what I thought before I carried out a quick test. The result really surprised me.

    Here is a much simplified image alpha-blending code snippet in C (test.c):

        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        uint8_t pixels[640*480];
        uint8_t alpha[640*480];
        uint8_t blended[640*480];

        void blend(uint8_t* px, uint8_t* al, uint8_t* result, int size)
        {
            for (int i = 0; i < size; i++) {
                result[i] = (uint8_t)(((uint16_t)px[i]) * al[i] / 255);
            }
        }

        int main(void)
        {
            memset(pixels, 128, 640*480);
            memset(alpha, 128, 640*480);
            memset(blended, 255, 640*480);

            // Test 10 frames
            for (int i = 0; i < 10; i++) {
                blend(pixels, alpha, blended, 640*480);
            }
            return 0;
        }

    I compiled it on my MacBook Air 2011 with the following command:

        gcc -O3 test.c -o test

    The 10-frame processing time is about 0.01s. In other words, it takes the C code 1ms to process one frame:

        $ time ./test
        real 0m0.010s
        user 0m0.006s
        sys  0m0.003s

    Then I wrote a Swift version of the same code (test.swift):

        let pixels = UInt8[](count: 640*480, repeatedValue: 128)
        let alpha = UInt8[](count: 640*480, repeatedValue: 128)
        let blended = UInt8[](count: 640*480, repeatedValue: 255)

        func blend(px: UInt8[], al: UInt8[], result: UInt8[], size: Int) {
            for (var i = 0; i < size; i++) {
                var b = (UInt16)(px[i]) * (UInt16)(al[i])
                result[i] = (UInt8)(b/255)
            }
        }

        for i in 0..10 {
            blend(pixels, alpha, blended, 640*480)
        }

    The build command line is:

        xcrun swift -O3 test.swift -o test

    Here I use the same O3 optimization level to hopefully make the comparison fair. However, the resulting speed is 100 times slower:

        $ time ./test
        real 0m1.172s
        user 0m1.146s
        sys  0m0.006s

    In other words, it takes Swift ~120ms to process one frame that takes C just 1ms. I also verified that the memory initialization time in both test programs is very small compared to the blend processing time. What happened?

    Read the article

  • Identifying if a user is in the local administrators group

    - by Adam Driscoll
    My problem

    I'm using P/Invoked Windows API functions to verify whether a user is part of the local administrators group. I'm using GetCurrentProcess, OpenProcessToken, GetTokenInformation and LookupAccountSid to verify whether the user is a local admin. GetTokenInformation returns a TOKEN_GROUPS struct with an array of SID_AND_ATTRIBUTES structs. I iterate over the collection and compare the user names returned by LookupAccountSid.

    My problem is that locally (or, more generally, on our in-house domain) this works as expected: builtin\Administrators is located within the group membership of the current process token and my method returns true. On the domain of another developer the function returns false. LookupAccountSid works properly for the first two iterations of the TOKEN_GROUPS struct, returning None and Everyone, and then craps out complaining that "A parameter is incorrect."

    What would cause only two groups to work correctly? The TOKEN_GROUPS struct indicates that there are 14 groups. I'm assuming it's the SID that is invalid. Everything that I have P/Invoked I have taken from examples on the PInvoke website. The only difference is that with LookupAccountSid I have changed the Sid parameter from a byte[] to an IntPtr, because SID_AND_ATTRIBUTES is also defined with an IntPtr. Is this OK, given that LookupAccountSid is defined with a PSID?

    LookupAccountSid P/Invoke:

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool LookupAccountSid(
            string lpSystemName,
            IntPtr Sid,
            StringBuilder lpName,
            ref uint cchName,
            StringBuilder ReferencedDomainName,
            ref uint cchReferencedDomainName,
            out SID_NAME_USE peUse);

    Where the code falls over:

        for (int i = 0; i < usize; i++)
        {
            accountCount = 0;
            domainCount = 0;

            // Get sizes
            LookupAccountSid(null, tokenGroups.Groups[i].SID, null, ref accountCount,
                null, ref domainCount, out snu);

            accountName2.EnsureCapacity((int) accountCount);
            domainName.EnsureCapacity((int) domainCount);

            if (!LookupAccountSid(null, tokenGroups.Groups[i].SID, accountName2, ref accountCount,
                domainName, ref domainCount, out snu))
            {
                // Finds its way here after 2 iterations
                // But only in a different developer's domain
                var error = Marshal.GetLastWin32Error();
                _log.InfoFormat("Failed to look up SID's account name. {0}",
                    new Win32Exception(error).Message);
                continue;
            }

    If more code is needed, let me know. Any help would be greatly appreciated.

    Read the article

  • Database advantages? Access, MySQL, msSQL, or any others?

    - by JimZ
    Dear all Stackoverflowers,

    I just started to learn programming and I'm now putting this question online based on a quote: no question is silly. My work needs to develop a web-based order system, which requires a database system. Since I have used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL or MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find that Access can fulfill my requirements. I also tried developing with MSSQL, which I found not quite as convenient to use. I have searched Stack Overflow and found no general answer to my doubt. Now I am sincerely hoping some experienced and professional developers can clear up my doubts. I'm listing some Access advantages which I don't think other database systems have; I hope you can help me find these advantages in the others as well.

    1. Access is portable: I can just copy an xxx.accdb file to my company and continue with development.
    2. Access makes it easy to generate helpful tables; for example, it automatically generates a field that counts automatically and can be used as the primary key value.
    3. It is more compatible with Excel for displaying and filtering data.
    4. Importantly, it needs nearly no environment setup; it just needs MS Office to be installed.
    ... and others.

    However, I also find some points where MSSQL has the advantage:

    1. Security.
    2. Easy to back up (just use a BACKUP ... SQL statement).
    3. You can write stored procedures to save some functions to the database.
    ... and others.

    Specifically, I wish some friends could tell me how to make other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the versions of MSSQL are not the same. Thank you all and best regards. :)

    Read the article

  • VB.Net 2008 IDE hanging - MSVB7.dll eating 100% CPU when editing code

    - by Andrew Backer
    I am having a problem with msvb7.dll eating 50%+ CPU on my dual-core system. This usually lasts 10-30 seconds or so, during which time the IDE is non-responsive. It occurs when I do pretty much anything in the text editor, and can be reproduced by simply adding blank lines to a function and then deleting them. Or pasting some code. Or... lots of things.

    SP1 is installed. I had DevExpress' Refactor!/CodeRush, components, and CodeIt.Right installed, but have removed all three of them. (I had installed the latest version of Refactor! Pro (9.3.4), perhaps the day before.) I have tried a VS.NET repair. There is a KB that referenced some CPU issues with VB, but it was included in SP1.

    Also:
    - The solution consists of ~30 VB projects and 2 C# projects.
    - 8 other developers aren't having any issues with this (or at least not the SAME issues; we all have some).
    - A clean get from TFS was done.
    - The project builds properly, and I can even debug.
    - This doesn't seem to happen on really small solutions, but perhaps it does and it just goes away super quickly.

    Any clues at all as to what might be causing this, or how to fix it? I REALLY don't want to lose another day uninstalling, reinstalling and patching and so on =) If that even fixes it.

    Here is the stack trace (Process Explorer) that I get from the threads window when msvb7.dll is churning:

        --- title in Process Explorer [threads] tab for process --------
        cpu: 49.28%  cswitch delta: 300 to 3500  startaddress: [msvb7.dll+0x4218c]
        msvb7.dll version: 9.0.30729.1

        --- actual stack trace -------
        ntkrnlpa.exe!KiUnexpectedInterrupt+0x121
        ntkrnlpa.exe!ZwYieldExecution+0x1c56
        ntkrnlpa.exe!KiDispatchInterrupt+0x72e
        NDIS.sys!NdisFreeToBlockPool+0x15e1
        // shortened stack trace. all of these are from msvb7
        msvb7.dll+0x46ce7 <- 0x2676a <- 0x2698e <- 0x38031 <- 0x2659f <- 0x26644
        msvb7.dll+0x25f29 <- 0x2ac7a <- 0x27522 <- 0x274a0 <- 0x2b5ce <- 0x2b6e4
        msvb7.dll+0x67d0a <- 0x68551 <- 0x6817b <- 0x681f0 <- 0x67c38 <- 0x65fa8
        msvb7.dll+0x666c6 <- 0x6672c <- 0x6673d <- 0x6677c <- 0x667b4 <- 0x63c77
        msvb7.dll+0x63e97 <- 0x42c3a <- 0x42bc1 <- 0x41bd7
        kernel32.dll!GetModuleFileNameA+0x1b4

    This is the list of installed components from Help > About ("copy info"), shortened to a reasonable length:

        Microsoft Visual Studio 2008 | Version 9.0.30729.1 SP
        Microsoft Visual Studio 2008 Professional Edition - ENU Service Pack 1 (KB945140) KB945140
        Microsoft .NET Framework | Version 3.5 SP1
        Microsoft Visual Basic 2008
        Microsoft Visual C# 2008
        Microsoft Visual F# for Visual Studio 2008
        Microsoft Visual Studio 2008 Team Explorer | Version 9.0.30729.1
        Microsoft Visual Studio 2008 Tools for Office
        Microsoft Visual Web Developer 2008
        Hotfix for Microsoft Visual Studio 2008 Professional Edition - ENU
        KB944899, KB945282, KB946040, KB946308, KB946344, KB946581, KB947171,
        KB947173, KB947180, KB947540, KB947789, KB948127, KB946260, KB946458, KB948816
        Microsoft Recipe Framework Package 8.0
        Process Editor WIT Designer 1.4.0.0
        Process Editor for Microsoft Visual Studio Team Foundation Server, Version 1.4.0.0
        tangible T4 Editor 9.0
        tangible T4 Text Template Editor - T4 Editor
        tangibleprojectsystem 1.0
        Team Foundation Server Power Tools October 2008
        SQL Prompt 4.0 (disabled)

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • How to setup linux permissions for the WWW folder?

    - by Xeoncross
    Updated summary

    The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:

    1. PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even something else like FastCGI for PHP).
    2. The developers.

    How do they get access? I know that someone, somewhere has done this before. With however many billions of websites out there, you would think there would be more information on this topic.

    I know that 777 is full read/write/execute permission for owner/group/other, so this doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that:

    1. source control like git or svn,
    2. users in a group like "websites" (or even added to "www-data"),
    3. servers like apache or lighttpd,
    4. and PHP/Perl/Ruby

    can all read, create, and run files (and directories) there?

    If I'm correct, Ruby and PHP scripts are not "executed" directly but passed to an interpreter, so there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make all files shareable by these four entities, make all files non-executable by mistake, block everyone else from the directory entirely, and set the permission mode to "sticky" for all future files. Is this correct?

    Update 1: I just realized that files and directories might need different permissions. I was talking about files above, so I'm not sure what the directory permissions would need to be.

    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and subfolders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.

    Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host.

    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP or svn/git run as, and how to change them.

    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person who has the best method of securing and managing the www directory wins.

    Read the article

  • Determining cause of high NFS/IO utilization without iotop

    - by Matt
    I have a server that is doing an NFSv4 export for users' home directories. There are roughly 25 users (mostly developers/analysts) and about 40 servers mounting the home directory export. Performance is miserable, with users often seeing multi-second lags for simple commands (like ls, or writing a small text file). Sometimes the home directory mount completely hangs for minutes, with users getting "permission denied" errors.

    The hardware is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). The OS is CentOS 5.6; the home directory partition is ext3, with options "rw,data=journal,usrquota". I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for the home directories.

    What I find curious, and suspicious, is that the sda device often has very high utilization, even though it only contains the OS. I would expect this virtual drive to be idle almost all the time. The system is not swapping, according to "free" and "vmstat". Why would there be major load on this device? Here is a 30-second snapshot from iostat:

        Time: 09:37:28 AM
        Device: rrqm/s   wrqm/s     r/s       w/s      rkB/s      wkB/s  avgrq-sz avgqu-sz   await  svctm  %util
        sda       0.00    44.09    0.03    107.76      0.13     607.40     11.27     0.89    8.27   7.27  78.35
        sda1      0.00     0.00    0.00      0.00      0.00       0.00      0.00     0.00    0.00   0.00   0.00
        sda2      0.00    44.09    0.03    107.76      0.13     607.40     11.27     0.89    8.27   7.27  78.35
        sdb       0.00  2616.53    0.67    157.88      2.80   11098.83    140.04     8.57   54.08   4.21  66.68
        sdb1      0.00  2616.53    0.67    157.88      2.80   11098.83    140.04     8.57   54.08   4.21  66.68
        dm-0      0.00     0.00    0.03    151.82      0.13     607.26      8.00     1.25    8.23   5.16  78.35
        dm-1      0.00     0.00    0.00      0.00      0.00       0.00      0.00     0.00    0.00   0.00   0.00
        dm-2      0.00     0.00    0.67   2774.84      2.80   11099.37      8.00   474.30  170.89   0.24  66.84
        dm-3      0.00     0.00    0.67   2774.84      2.80   11099.37      8.00   474.30  170.89   0.24  66.84

    It looks like iotop is the ideal tool for sniffing out these kinds of issues, but I'm on CentOS 5.6, which doesn't have a new enough kernel to support that program. I looked at "Determining which process is causing heavy disk I/O?", and besides iotop, one of the suggestions was to do "echo 1 > /proc/sys/vm/block_dump". I did that (after directing kernel messages to tempfs). In about 13 minutes I had about 700k reads or writes, roughly half from kjournald and the other half from nfsd:

        # egrep " kernel: .*(READ|WRITE)" messages | wc -l
        768439
        # egrep " kernel: kjournald.*(READ|WRITE)" messages | wc -l
        403615
        # egrep " kernel: nfsd.*(READ|WRITE)" messages | wc -l
        314028

    For what it's worth, for the last hour utilization has constantly been over 90% for the home directory drive. My 30-second iostat keeps showing output like this:

        Time: 09:36:30 PM
        Device: rrqm/s   wrqm/s     r/s       w/s      rkB/s      wkB/s  avgrq-sz avgqu-sz   await  svctm  %util
        sda       0.00     6.46    0.20     11.33      0.80      71.71     12.58     0.24   20.53  14.37  16.56
        sda1      0.00     0.00    0.00      0.00      0.00       0.00      0.00     0.00    0.00   0.00   0.00
        sda2      0.00     6.46    0.20     11.33      0.80      71.71     12.58     0.24   20.53  14.37  16.56
        sdb     137.29     7.00  549.92      3.80  22817.19      43.19     82.57     3.02    5.45   1.74  96.32
        sdb1    137.29     7.00  549.92      3.80  22817.19      43.19     82.57     3.02    5.45   1.74  96.32
        dm-0      0.00     0.00    0.20     17.76      0.80      71.04      8.00     0.38   21.21   9.22  16.57
        dm-1      0.00     0.00    0.00      0.00      0.00       0.00      0.00     0.00    0.00   0.00   0.00
        dm-2      0.00     0.00  687.47     10.80  22817.19      43.19     65.48     4.62    6.61   1.43  99.81
        dm-3      0.00     0.00  687.47     10.80  22817.19      43.19     65.48     4.62    6.61   1.43  99.82

    Read the article

  • Broken cups installation on a ubuntu server 64

    - by user67046
    Hi, I am having trouble with a CUPS installation. It seems to be in a broken state: when I try to reinstall it, it stalls, and the same happens if I try to remove it completely. I am running the 64-bit server version of Ubuntu 10.10 with kernel Linux version 2.6.35-22-server.

    When I try to start the cups daemon with the following command

        sudo service cups start

    it just sits there and nothing happens. I have tried to remove it, to be able to reinstall it, with the following command

        sudo apt-get purge cups

    It finally stalls with the following message

        Removing cups ...

    After that nothing happens. The process tree for the apt-get command looks like this:

        1404   1404  1404  ?     00:00:00 sshd
        26495 26495 26495  ?     00:00:00 sshd
        26581 26495 26495  ?     00:00:00 sshd
        26582 26582 26582  pts/4 00:00:00 bash
        27158 27158 26582  pts/4 00:00:00 apt-get
        27172 27172 27172  pts/2 00:00:00 dpkg
        27176 27172 27172  pts/2 00:00:00 cups.prerm
        27178 27172 27172  pts/2 00:00:00 stop

    I have tried to leave the process running for a while to see if I get any error messages, but without success. To get out of it I have to kill the processes.

        sudo dpkg --configure cups
        dpkg: error processing cups (--configure):
         package cups is already installed and configured
        Errors were encountered while processing:
         cups

        sudo dpkg --status cups
        Package: cups
        Status: purge ok installed
        Priority: optional
        Section: net
        Installed-Size: 8292
        Maintainer: Ubuntu Developers <[email protected]>
        Architecture: amd64
        Version: 1.4.4-6ubuntu2.3
        Replaces: cupsddk-drivers (<< 1.4.0)
        Provides: cupsddk-drivers
        Depends: libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16), libc6 (>= 2.7), libcups2 (>= 1.4.4-3~), libcupscgi1 (>= 1.4.2), libcupsdriver1 (>= 1.4.0), libcupsimage2 (>= 1.4.0), libcupsmime1 (>= 1.4.0), libcupsppdc1 (>= 1.4.0), libdbus-1-3 (>= 1.0.2), libgcc1 (>= 1:4.1.1), libgnutls26 (>= 2.7.14-0), libgssapi-krb5-2 (>= 1.8+dfsg), libijs-0.35, libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpaper1, libpoppler7, libslp1, libstdc++6 (>= 4.1.1), libusb-0.1-4 (>= 2:0.1.12), zlib1g (>= 1:1.1.4), debconf (>= 1.2.9) | debconf-2.0, upstart-job, poppler-utils (>= 0.12), procps, ghostscript, lsb-base (>= 3), cups-common (>= 1.4.4), cups-client (>= 1.4.4-6ubuntu2.3), ssl-cert (>= 1.0.11), adduser, bc, ttf-freefont, cups-ppdc
        Recommends: foomatic-filters (>= 4.0), cups-driver-gutenprint, ghostscript-cups
        Suggests: cups-bsd, foomatic-db-compressed-ppds | foomatic-db, hplip, xpdf-korean | xpdf-japanese | xpdf-chinese-traditional | xpdf-chinese-simplified, cups-pdf, smbclient (>= 3.0.9), udev
        Breaks: foomatic-filters (<< 4.0)
        Conflicts: cupsddk-drivers (<< 1.4.0)
        Conffiles:
         /etc/fonts/conf.d/99pdftoopvp.conf a5221cfad70a981c80864229ef56586d
         /etc/logrotate.d/cups 5bb41fa9900f0d1c565954405a2bd7c4
         /etc/default/cups 2b436fbb1a32b82b6aba45a76a1d7e40
         /etc/pam.d/cups ff2488324854f7b1e892bb0df062d5f0
         /etc/init/cups.conf 1a3cd022e8474e3d2b44640f33ce68e3
         /etc/ufw/applications.d/cups 29e98a6d850da251e180c3d68dec2bd3
         /etc/apparmor.d/usr.sbin.cupsd 60c4b26bfd5c033baa3dd48a3b2e9911
         /etc/cups/cupsd.conf e2c7ec15835ea0939e5e86f7c6efcc03
         /etc/cups/snmp.conf 2326a8af1e112676d55245bc5eb459ca
         /etc/cups/cupsd.conf.default a68d54d76021e857dd1d64edf57d36c5
        Description: Common UNIX Printing System(tm) - server
         The Common UNIX Printing System (or CUPS(tm)) is a printing system and general replacement for lpd and the like. It supports the Internet Printing Protocol (IPP), and has its own filtering driver model for handling various document types.
         .
         This package provides the CUPS scheduler/daemon and related files.
        Original-Maintainer: Debian CUPS Maintainers <[email protected]>

    I would be grateful if someone could provide some help on how to solve this issue.
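    In case it helps anyone hitting the same hang: the process tree above suggests the prerm script is stuck in the upstart "stop" job it spawns. One possible way out, sketched here and not verified against this exact box (the PIDs are the ones from the listing above, and the dpkg paths are the stock Ubuntu locations), is to kill the stuck chain, park the prerm script, and retry:

        # kill the hung maintainer-script chain (PIDs from the ps listing above)
        sudo kill 27178 27176 27172 27158

        # move the prerm script aside so dpkg cannot hang on it again
        sudo mv /var/lib/dpkg/info/cups.prerm /var/lib/dpkg/info/cups.prerm.disabled

        # retry the purge, then reinstall cleanly
        sudo apt-get purge cups
        sudo apt-get install cups

    If the purge then completes, the parked cups.prerm can be deleted; the reinstall ships a fresh copy.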

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students, with the default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group Webdev with chmod g+s, so that new files/folders also belong to the group Webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE the file has permissions of 644 on the server (not group writable). However, when I make a new file by connecting with Cyberduck (an SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. After some trial and error, I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, a newly created file is 644 by default. When the client uploads a file that's already 644, it stays 644 on the server side (the umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but a file uploaded from my Mac via SCP doesn't get the default ACL permissions.

    In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students modifies a file created by someone else and tries to upload or save it, Coda tries to also do a chmod, but fails because that user isn't the owner of the file.

    My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow, and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, the newly created files on the server would behave the same (644), which is the default on the Mac.

    Other options:
    1) Either I teach the students to use ssh/chmod with their IDE to change their own file permissions when uploading.
    2) I make all the students' Macs have the default umask of 0002, which would upload files with the right permissions.
    3) I write a cron script to fix the file permissions every 5 to 15 minutes (this option I think is the worst if students are working together at the same time).

    Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664, even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
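    For readers with a similar setup, one combination worth experimenting with is a setgid directory plus default ACLs, with a cron fallback for clients (like Coda) that push their own 644 mode on upload. This is only a sketch; the group name, path and schedule below are taken from, or invented around, the description above:

        # make new files/dirs inherit the Webdev group and grant it rwX by default
        sudo chmod g+s /var/www/shared
        sudo setfacl -R -m g:webdev:rwX -m d:g:webdev:rwX /var/www/shared

        # /etc/cron.d/fix-webdev-perms (hypothetical): repair files uploaded as 644
        */5 * * * * root find /var/www/shared -type f ! -perm -g=w -exec chmod g+rw {} +

    Default ACLs only help when the client does not set an explicit mode on upload, which is why the cron job (or bindfs, as already in use) still ends up doing the real work for SCP/SFTP uploads that arrive as 644.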

    Read the article

  • Why doesn't this for loop work?

    - by evilsoup
    This is on Ubuntu 12.04. I'm trying to figure out how to get ffmpeg to do a batch conversion of FLACs to MP3, recursively. If I cd into a directory and use

        for f in *.flac; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done

    that works perfectly fine. However, when I try this, it doesn't work:

        for f in "$(find . -type f -name *.flac)"; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done

    It doesn't even throw up any useful errors (but here is the output anyway, no need to complain):

        evilsoup@enchantment:~/Music/Jean Sibelius$ for f in "$(find . -type f -name *.flac)"; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done
        ffmpeg version git-2012-12-18-b7e085a Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 18 2012 19:23:11 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
          configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3
          libavutil 52. 12.100 / 52. 12.100
          libavcodec 54. 80.100 / 54. 80.100
          libavformat 54. 49.102 / 54. 49.102
          libavdevice 54. 3.102 / 54. 3.102
          libavfilter 3. 28.100 / 3. 28.100
          libswscale 2. 1.103 / 2. 1.103
          libswresample 0. 17.102 / 0. 17.102
          libpostproc 52. 2.100 / 52. 2.100
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/02. Symphony No.1.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/03. Symphony No.1.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/stripped2.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/05. Symphony No.1.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/stripped3.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/09. Andante festivo.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/08. Symphony No.3.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/01. Finlandia.flac
        ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/07. Symphony No.3.flac
        ./Symphonies 1, 2, 3 & 5

    I've tested the find command on its own, and it works as expected, so the problem has to be something to do with the interaction between find and for. I'm aware that I could do something with find's -exec option, but I can't find any way to do string substitution as I can with a bash for loop, and I'd rather not have a bunch of file.flac.mp3s to deal with, even if they could be fixed with a simple rename.
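    For anyone landing here with the same problem: quoting "$(find ...)" hands the whole multi-line result to the loop as a single word, and leaving it unquoted would still word-split on the spaces in the paths. A null-delimited read loop sidesteps both issues. Here is a sketch of that approach (note -nostdin, since ffmpeg otherwise reads from the same stdin the loop is consuming; on builds without -nostdin, redirecting ffmpeg's input from /dev/null does the same job):

        find . -type f -name '*.flac' -print0 |
        while IFS= read -r -d '' f; do
            ffmpeg -nostdin -i "$f" -c:a libmp3lame -q:a 2 "${f%.flac}.mp3"
        done

    The "${f%.flac}.mp3" expansion does the same rename as the original "${f/%flac/mp3}", just anchored to the extension.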

    Read the article

  • Crash in audio resampler with some audio rates - FFMPEG PHP ( Solved! )

    - by Olaf Erlandsen
    I have a problem with this command (FFMPEG PHP).

    Command:

        ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 -f flv 62f76f050494f0ed6a5997967c00c0c0.flv

    Output:

        FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
          built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
          configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
          libavutil 50.15. 1 / 50.15. 1
          libavcodec 52.72. 2 / 52.72. 2
          libavformat 52.64. 2 / 52.64. 2
          libavdevice 52. 2. 0 / 52. 2. 0
          libavfilter 1.19. 0 / 1.19. 0
          libswscale 0.11. 0 / 0.11. 0
          libpostproc 51. 2. 0 / 51. 2. 0
        [asf @ 0xe81670]max_analyze_duration reached
        Input #0, asf, from '/var/www/resources/tmp/62f76f050494f0ed6a5997967c00c0c0.wmv':
          Metadata:
            WMFSDKVersion : 12.0.7601.17514
            WMFSDKNeeded : 0.0.0.0000
            IsVBR : 0
          Duration: 00:00:50.87, bitrate: 2467 kb/s
            Stream #0.0: Audio: wmapro, 44100 Hz, stereo, flt, 256 kb/s
            Stream #0.1: Video: vc1, yuv420p, 950x460 [PAR 1:1 DAR 95:46], 25 fps, 25 tbr, 1k tbn, 25 tbc
        Output #0, flv, to '/var/www/resources/media/62f76f050494f0ed6a5997967c00c0c0.flv':
          Metadata:
            encoder : Lavf52.64.2
            Stream #0.0: Video: flv, yuv420p, 950x460 [PAR 1:1 DAR 95:46], q=2-31, 200 kb/s, 1k tbn, 29.97 tbc
            Stream #0.1: Audio: libmp3lame, 11025 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.1 -> #0.0
          Stream #0.0 -> #0.1
        Press [q] to stop encoding
        frame= 72 fps= 0 q=5.0 size= 0kB time=10.91 bitrate= 0.0kbits/s
        Multiple frames in a packet from stream 0
        Warning, using s16 intermediate sample format for resampling
        frame= 141 fps=139 q=5.0 size= 103kB time=8.15 bitrate= 103.2kbits/s
        frame= 220 fps=144 q=5.0 size= 875kB time=10.92 bitrate= 656.6kbits/s
        frame= 290 fps=143 q=5.0 size= 1525kB time=13.74 bitrate= 909.1kbits/s
        frame= 356 fps=141 q=5.0 size= 2153kB time=15.99 bitrate=1103.1kbits/s
        frame= 427 fps=141 q=5.0 size= 2847kB time=18.70 bitrate=1247.0kbits/s
        frame= 497 fps=141 q=5.0 size= 3771kB time=21.16 bitrate=1460.0kbits/s
        frame= 575 fps=142 q=5.0 size= 4695kB time=24.61 bitrate=1563.0kbits/s
        frame= 639 fps=141 q=5.0 size= 5301kB time=26.80 bitrate=1620.2kbits/s
        frame= 703 fps=139 q=5.0 size= 5829kB time=29.36 bitrate=1626.2kbits/s
        frame= 774 fps=139 q=5.0 size= 6659kB time=32.39 bitrate=1684.0kbits/s
        frame= 842 fps=139 q=5.0 size= 7915kB time=35.27 bitrate=1838.6kbits/s
        frame= 911 fps=139 q=5.0 size= 9011kB time=37.98 bitrate=1943.4kbits/s
        frame= 975 fps=138 q=5.0 size= 9788kB time=40.59 bitrate=1975.3kbits/s
        frame= 1041 fps=138 q=5.0 size= 10904kB time=43.83 bitrate=2037.9kbits/s
        frame= 1115 fps=138 q=5.0 size= 11795kB time=46.24 bitrate=2089.8kbits/s
        frame= 1183 fps=138 q=5.0 size= 12678kB time=48.74 bitrate=2130.7kbits/s
        frame= 1247 fps=137 q=5.0 size= 13964kB time=51.36 bitrate=2227.5kbits/s
        frame= 1271 fps=136 q=5.0 Lsize= 15865kB time=58.86 bitrate=2208.1kbits/s
        video:15366kB audio:462kB global headers:0kB muxing overhead 0.238956%

    Problem:

        Warning, using s16 intermediate sample format for resampling

    I've also tried changing the parameter from -ar 44100 to -ar 11025. Thanks!

    Solution: read this link: http://en.wikipedia.org/wiki/MP3#Bit_rate
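    As a side note for later readers: the "s16 intermediate sample format" line is only a warning; the failures in this scenario usually come from the sample-rate/bit-rate combination handed to libmp3lame. A variant of the command that keeps the audio at 44.1 kHz with an explicit MP3 bit rate, and a conventional -async value, might look like this (a sketch only, reusing the file names above; the option spellings are the FFmpeg 0.6-era ones):

        ffmpeg -i 62f76f050494f0ed6a5997967c00c0c0.wmv -ss 0 -t 99 -y \
               -r 29.970 -qscale 5 \
               -ar 44100 -ac 2 -ab 128k -async 1 \
               -f flv 62f76f050494f0ed6a5997967c00c0c0.flv

    The Wikipedia section linked in the solution is relevant because MP3 only permits certain bit-rate/sample-rate pairings, and an unsupported pairing is a plausible trigger for the resampler path that crashes here.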

    Read the article

  • How to set up Linux permissions for the WWW folder?

    - by Xeoncross
    Updated Summary

    The /var/www directory is owned by root:root, which means that no one can use it and it's entirely useless. Since we all want a web server that actually works (and no one should be logging in as "root"), we need to fix this. Only two entities need access:

    1) PHP/Perl/Ruby/Python all need access to the folders and files, since they create many of them (i.e. /uploads/). These scripting languages should be running under nginx or apache (or even some other thing like FastCGI for PHP).
    2) The developers.

    How do they get access? I know that someone, somewhere has done this before. With however-many billions of websites out there you would think that there would be more information on this topic.

    I know that 777 is full read/write/execute permission for owner/group/other, so that doesn't seem to be needed, as it leaves random users full permissions. What permissions need to be used on /var/www so that source control like git or svn, users in a group like "websites" (or even added to "www-data"), servers like apache or lighttpd, and PHP/Perl/Ruby can all read, create, and run files (and directories) there?

    If I'm correct, Ruby and PHP scripts are not "executed" directly; they are passed to an interpreter. So there is no need for execute permission on files in /var/www...? Therefore, it seems like the correct permission would be chmod -R 1660, which would make all files shareable by these four entities, make all files non-executable by mistake, block everyone else from the directory entirely, and set the permission mode to "sticky" for all future files. Is this correct?

    Update: I just realized that files and directories might need different permissions. I was talking about files above, so I'm not sure what the directory permissions would need to be.

    Update 2: The folder structure of /var/www changes drastically, as one of the four entities above is always adding (and sometimes removing) folders and sub-folders many levels deep. They also create and remove files that the other three entities might need read/write access to. Therefore, the permissions need to do the four things above for both files and directories. Since none of them should need execute permission (see the question about Ruby/PHP above), I would assume that rw-rw-r-- permission would be all that is needed and completely safe, since these four entities are run by trusted personnel (see #2) and all other users on the system only have read access.

    Update 3: This is for personal development machines and private company servers. No random "web customers" like a shared host.

    Update 4: This article by Slicehost seems to be the best at explaining what is needed to set up permissions for your www folder. However, I'm not sure what user or group apache/nginx with PHP OR svn/git run as, and how to change them.

    Update 5: I have (I think) finally found a way to get this all to work (answer below). However, I don't know if this is the correct and SECURE way to do this. Therefore I have started a bounty. The person with the best method of securing and managing the www directory wins.
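    For what it's worth, a commonly used baseline (not necessarily the "correct and SECURE" answer being asked for) is group ownership by the web server's group with setgid directories, which covers the four entities as long as everyone's umask is 0002. A sketch, assuming the Debian/Ubuntu group name www-data and a developer account named alice:

        # group-own the tree; new files/dirs inherit the group via the setgid bit
        sudo chgrp -R www-data /var/www
        sudo find /var/www -type d -exec chmod 2775 {} +
        sudo find /var/www -type f -exec chmod 0664 {} +

        # developers, and the user that svn/git runs as, write through the group
        sudo usermod -aG www-data alice

    Directories keep the execute bit (needed to traverse them), files stay 664 and non-executable, and the interpreter/web server reaches everything through group membership.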

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used.

    My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. Example below.

    - Is there any way to reduce the amount of logging from these actions? (By user? By source?)
    - Is there a cleaner way for the developers to perform these ssh executions without spawning so many sessions? Seems inefficient. Can I reuse the existing connections?

    Example log output:

        Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
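    Two partial answers, sketched here with illustrative host names; neither requires changing the black-box scripts themselves. First, OpenSSH connection multiplexing lets every ssh invocation reuse one TCP/authentication session (ControlPersist needs OpenSSH 5.6+, which stock EL5/EL6 do not ship, so a manually started master is shown instead). Second, on EL6 rsyslog can drop the per-session noise for that one user before it reaches /var/log/secure:

        # ~admin/.ssh/config on the secondary (pulling) server
        Host primary
            HostName primary.example.com
            User admin
            ControlMaster auto
            ControlPath ~/.ssh/cm-%r@%h-%p

        # start a long-lived master once (e.g. from cron @reboot); later ssh calls reuse it
        ssh -MNf primary

        # /etc/rsyslog.d/quiet-admin-ssh.conf on the primary (EL6 / rsyslog only)
        :msg, contains, "for user admin" ~
        :msg, contains, "Accepted publickey for admin" ~

    The multiplexing only helps if the scripts reach the primary under a name that matches the Host stanza (a "Host *" stanza works if not), and the rsyslog discard rule is blunt, since it hides legitimate admin sessions too; filtering by source IP or routing those messages to a separate file may be preferable.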

    Read the article

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I am being as descriptive as possible in the hope that I'm making sense.

    The goal: Since developers can be inside or outside the firewall, there needs to be a single http point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so that buildserver.mydomain.com points to the build service box, which contains all of the services listed at the top of this post, and specific services are defined/located by the port number. This is working great on every machine inside and out, except from the build server itself. All services must be able to work using external URLs.

    If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials.

    However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin), and after the third prompt it directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works, i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication: from the box itself, SharePoint Central Admin, the SharePoint web app, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL, but succeed if called using the internal URL.

    So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and /reportServer respectively, instead of http://buildserver.domainname.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for the team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it.

    I have configured SharePoint with Alternate Access Mappings and set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
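    One thing worth checking, based purely on the symptoms described (the external FQDN fails only from the server itself, with repeated credential prompts ending in a blank page), is the Windows loopback check, which blocks NTLM authentication to a local site whose host name differs from the machine name. The usual workaround is to register the external names in BackConnectionHostNames; a sketch, run on buildserver with the host name from above:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
            /v BackConnectionHostNames /t REG_MULTI_SZ ^
            /d "buildserver.mydomain.com" /f
        iisreset

    (Disabling the loopback check entirely via DisableLoopbackCheck=1 under HKLM\SYSTEM\CurrentControlSet\Control\Lsa also works, but is generally discouraged on production boxes.) If that clears the local prompts, TFS could then be configured against the external Report Server and SharePoint URLs as intended.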

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

    - Runs on a single machine (i.e. the laptop in question) under Windows
    - I am not sharing things with other developers or workers; this is more for my own historical benefit.
    - I want to version source code, documentation and binary files
    - I have a large hierarchy of projects that are unrelated (see below)
    - I have files within the hierarchy that don't need to be controlled (but could be)
    - Some projects use Visual Studio, so some integration there could be nice.
    - There could be some sharing of files between jobs.
    - I generally only need a small amount of branching in code files

    The directory hierarchy that I have at the moment is somewhat like:

        Root
        |
        |--Customer #1
        |  |
        |  |--Job #1
        |  |  |
        |  |  |--Data files received from Customer for Job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software Project Files (controlled)
        |  |  |--Scratch dir for job (not controlled)
        |  |
        |  |--Job #2
        |  |  (same structure as above)
        |
        |--Customer #2
        |  |..
        |
        |--Customer #n
        |..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based at the root dir. Is this a valid approach?

    However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure, but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around; however, I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded.

    So what advice can you all give me? Thanks in advance!
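    To make the distributed option more concrete, here is what pitching a repository at the Job level might look like with Git (Git is only one candidate; Mercurial would be nearly identical). The folder names come from the tree above, and the ignore entries mark the "not controlled" directories:

        cd "Root/Customer #1/Job #1"
        git init
        printf '%s\n' \
            "Data files received from Customer for Job/" \
            "Scratch dir for job/" > .gitignore
        git add .
        git commit -m "Initial import of Job #1"

    One caveat relevant to the branching question: in Git and Mercurial a branch always covers the whole repository, so with Job-level repositories the documentation would branch along with the Software Project Files; only by pitching repositories one level deeper (per controlled sub-folder) would they branch independently.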

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30fps video (ImageMagick also handles things like putting some text captions on the frames along the way). When I go to let ffmpeg digest it into a video it clips along nicely on the color parts of the video, but when it gets to a black-and-white section it reports "frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703" and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump...

        ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4
        ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
          configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3
          libavutil 52. 12.100 / 52. 12.100
          libavcodec 54. 79.101 / 54. 79.101
          libavformat 54. 49.100 / 54. 49.100
          libavdevice 54. 3.102 / 54. 3.102
          libavfilter 3. 26.101 / 3. 26.101
          libswscale 2. 1.103 / 2. 1.103
          libswresample 0. 17.102 / 0. 17.102
          libpostproc 52. 2.100 / 52. 2.100
        Input #0, image2, from 'teststream/%06d.jpg':
          Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
            Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
        [libx264 @ 0x3450140] using SAR=1/1
        [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x3450140] profile High, level 3.0
        [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, mp4, to 'newffmpeg.mp4':
          Metadata:
            encoder : Lavf54.49.100
            Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (mjpeg -> libx264)
        Press [q] to stop, [?] for help
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444p
        frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
        video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
        [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933
        [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737
        [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514
        [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2%
        [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5%
        [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9%
        [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7%
        [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
        [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
        [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46%
        [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6%
        [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4%
        [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
        [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
        [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2%
        [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0%
        [libx264 @ 0x3450140] ref B L1: 95.0% 5.0%
        [libx264 @ 0x3450140] kb/s:626.88
        Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
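    A note that may help whoever hits the same drop counter: with an image2 sequence the input rate is normally set with the demuxer's -framerate option (or, on older builds, -r placed before -i), while -r after -i asks ffmpeg to retime the output and duplicate or drop frames to match. A sketch of the variant worth trying, which also pins the pixel format so the yuvj444p/yuvj422p flip-flopping visible in the log above does not interfere:

        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" \
               -c:v libx264 -pix_fmt yuv420p -r 30 newffmpeg.mp4

    Whether -framerate is accepted depends on the build; if it is rejected, the older "-r 30 before -i" form expresses the same intent.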

    Read the article

< Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >