Search Results

Search found 100454 results on 4019 pages for 'sql server 2011'.


  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN (or TechNet?) forum here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, the script finds attacking IP addresses and adds them to a firewall block rule. My assumptions:
    1. You are running a Windows Server 2008 box facing the Internet.
    2. You have some ports open for services, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see in my code I'm only dealing with these two since that's what I have open. You can add further port numbers if you like, but the way they need to be processed may differ from these two.
    3. I strongly suggest you use STRONG passwords and follow all security best practices. This ps1 code is NOT for adding security to your server; it is for reducing the nuisance of brute-force attacks and making the sysadmin's life easier: your FTP log won't hold megabytes of nonsense, and your Windows Security log won't roll over so fast that it can only tell you what happened in the last month.
    4. You are comfortable setting up Windows Firewall rules. In my code the rule is named "MY BLACKLIST"; you need to set up a similar one and set it to BLOCK everything.
    5. My rule is dangerous because it risks blocking me out as well. I do have a backup plan, the DELL DRAC5, so that if that happens I can still remote-console to my server and reset the firewall.
    6. The code is by no means perfect: the coding style, the use of PowerShell, the hard-coded parts can all be improved, but it's good enough for me already. It has been running on my server for more than 7 MONTHS.
    7. The current code still has a problem that I haven't solved yet; more on this after the code. :)

      #Dong Xie, March 2012
      #my simple code to monitor attack and deal with it
      #Windows Server 2008 Logon Type
      #8: NetworkCleartext, i.e. FTP
      #10: RemoteInteractive, i.e. RDP

      $tick = 0;
      "Start to run at: " + (get-date);

      $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
      $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

      while($True) {
        $blacklist = @();

        "Running... (tick:" + $tick + ")"; $tick+=1;

        #Port 3389
        $a = @()
        netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
          $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }
        if ($a.count -gt 0) {
          $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `
            $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique

          foreach ($ip in $a) {
            if ($ips -contains $ip) {
              if (-not ($blacklist -contains $ip)) {
                $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
                "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
                if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}
              }
            }
          }
        }

        #FTP
        $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
        #Get-EventLog has built-in switches for EventID, Message, Time, etc. but using any of them is VERY slow.
        $count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
          $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;
        if ($count -gt 50) #threshold
        {
          $ips = @();
          $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
            | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
            | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
            | where {$_.Count -ge 10} | select -ExpandProperty Name;
          $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
            | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `
            | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
            | where {$_.Count -ge 10} | select -ExpandProperty Name;
          $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;

          foreach ($ip in $ips) {
            if (-not ($blacklist -contains $ip)) {
              "Found attacking IP on FTP: " + $ip;
              $blacklist = $blacklist + $ip;
            }
          }
        }

        #Firewall change
        <#
        $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255","");
        #inside $current there is no \r or \n need remove.
        foreach ($ip in $blacklist) {
          if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
            "Adding this IP into firewall blocklist: " + $ip;
            $c = 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
            Invoke-Expression $c;
          }
        }
        #>

        foreach ($ip in $blacklist) {
          $fw = New-Object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
          $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?
          if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))
          {
            "Adding this IP into firewall blocklist: " + $ip;
            $myrule.RemoteAddresses += ("," + $ip);
          }
        }

        Wait-Event -Timeout 30 #pause 30 secs

      } # end of top while loop.

    Further points:
    1. I assume the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace these with your real IPs.
    2. I assume you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace that as well.
    3. The threshold for a 3389 attack is 20; you don't want to block yourself just because you typed your password wrong 3 times. You can change this threshold by your own reasoning.
    4. The FTP check only looks at the log for attacks in the last 5 minutes; you can change that as well.
    5. I assume the server is serving FTP on both IP addresses and that the log paths are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly.
    6. The FTP checking code only reads the last 200 lines of the log, and the threshold is 10; change as you wish.
    7. The code runs in a loop; you can set the loop interval on the last line.
    To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window and type: powershell.exe -file your_powershell_file_name.ps1. It will start running, and you can Ctrl-C to break it. (The original post includes screenshots of the script running and of it detecting an attack and adding the firewall rule.) Regarding the design of the code:
    1. There are many ways you can detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. You don't want to block yourself, and you don't want to block your customers/users, i.e. the good guys.
    2. Thus for each service/port I double-check. For 3389 the IP first needs to show up in netstat.exe and then in the event log; for FTP I first check the event log and then the FTP log files.
    3. In three places I need to make sure I'm not adding myself to the block rule: -ne for a single IP, -like for a subnet.
    Now the final bit:
    1. The code will stop working after a while (how long depends on how heavily you are attacked; it could be weeks, months, or days?!). It will throw a red error message in CMD. Don't panic, it does no harm, but it also no longer blocks new attacks. THE REASON (not confirmed with MS people): the COM object that manages the firewall will only accept a list of IP addresses up to around 32KB long, I think; once it reaches that limit you get the error message.
    2. This is in fact my second solution, using the COM object; the first solution is still in the comment block for your reference. It uses netsh, and it fails because, being run from CMD, you can only pass it a list of IPs up to about 8KB.
    3. I haven't worked out the workaround yet. One idea is to wrap that RemoteAddresses line with error checking and, once the limit is reached, make the newly detected IPs the whole list instead of appending to them (a rough sketch of this idea appears at the end of this post). This basically resets your block rule to ground zero and loses the previous bad IPs. That is not as harmful as it sounds: if, after a certain period has passed, any of these bad IPs still hasn't repented and continues to attack you, it only gets 30 seconds, or 20 guesses at your password, before you block it again. And there is the benefit that a bad IP may return to good hands, so you are not blocking a potential customer, or your CEO's home PC, just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4. But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask them how to set an arbitrary-length list of IP addresses in a rule (at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware); or, from a programming perspective, create a new rule once the old one is full, so you'll have MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, etc. Once in a while you can compile them together and start a business selling your blacklist on the market! Enjoy the code! p.s. PowerShell is REALLY REALLY GREAT!
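    As a rough, untested sketch of the workaround idea in point 3 above (the try/catch reset behaviour is an illustration of the idea, not code from the original script; the rule name "MY BLACKLIST" and the example IP are carried over or made up):

      $blacklist = @("203.0.113.5")  # example: newly detected attacker(s) from this pass of the loop
      foreach ($ip in $blacklist) {
        $fw = New-Object -comObject HNetCfg.FwPolicy2
        $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1
        if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
          try {
            # Append as before...
            $myrule.RemoteAddresses += ("," + $ip)
          } catch {
            # ...but when the list is full (around 32KB), start again with just the new IP.
            "RemoteAddresses appears to be full; resetting the blocklist to: " + $ip
            $myrule.RemoteAddresses = $ip
          }
        }
      }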

    Read the article

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and re-start log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just make absolutely sure a log shrink on the primary wouldn't ship over and occur on the secondary as well. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
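    For reference, the shrink described above can be done with something like this minimal T-SQL sketch (the database name and logical log file name are placeholders, not taken from the post):

      USE MyPrimaryDb;
      GO
      -- Check the logical file names and current sizes first.
      SELECT name, size / 128 AS size_mb FROM sys.database_files;
      GO
      -- Shrink the log file to a target size (in MB).
      DBCC SHRINKFILE (MyPrimaryDb_log, 1024);
      GO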

    Read the article

  • Runaway version store in tempdb

    - by DavidWimbush
    Today was really a new one. I got back from a week off and found our main production server's tempdb had gone from its usual 200MB to 36GB. Ironically I spent Friday at the most excellent SQLBits VI and one of the sessions I attended was Christian Bolton talking about tempdb issues - including runaway tempdb databases. How just-in-time was that?! I looked into the file growth history and it looks like the problem started when my index maintenance job was chosen as the deadlock victim. (Funny how they almost make it sound like you've won something.) That left tempdb pretty big but for some reason it grew several more times. And since I'd left the file growth at the default 10% (aaargh!) the worse it got the worse it got. The last regrowth event was 2.6GB. Good job I've got Instant Initialization on. Since the Disk Usage report showed it was 99% unallocated I went into the Shrink Files dialogue which helpfully informed me the data file was 250MB.  I'm afraid I've got a life (allegedly) so I restarted the SQL Server service and then immediately ran a script to make the initial size bigger and change the file growth to a number of MB. The script complained that the size was smaller than the current size. Within seconds! WTF? Now I had to find out what was using so much of it. By using the DMV sys.dm_db_file_space_usage I found the problem was in the version store, and using the DMV sys.dm_db_task_space_usage and the Top Transactions by Age report I found that the culprit was a 3rd party database where I had turned on read_committed_snapshot and then not bothered to monitor things properly. Just because something has always worked before doesn't mean it will work in every future case. This application had an implicit transaction that had been running for over 2 hours.
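    For reference, the DMV checks mentioned above look roughly like this sketch (the column arithmetic is illustrative; 8KB pages are converted to MB):

      -- How much of tempdb is version store vs. user/internal objects?
      SELECT SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
             SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
             SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
             SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
      FROM tempdb.sys.dm_db_file_space_usage;

      -- Which sessions/tasks are currently holding tempdb space?
      SELECT session_id, request_id,
             user_objects_alloc_page_count,
             internal_objects_alloc_page_count
      FROM tempdb.sys.dm_db_task_space_usage
      ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;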

    Read the article

  • sp_ssiscatalog v1.0.1.0 now available for download

    - by jamiet
    13 days ago I wrote a blog post entitled Introducing sp_ssiscatalog (v1.0.0.0) in which I first made mention of sp_ssiscatalog, an open source stored procedure intended to make it easy to query the SSIS Catalog. I have been working on some enhancements since then and hence v1.0.1.0 is now available for download from Codeplex.
    What's new in this release
    This release includes the following enhancements:
    - [execution_id] now gets returned in a call to EXEC [dbo].[sp_ssiscatalog] @operation_type='exec';
    - Filter events by specifying packages to ignore: EXEC [dbo].[sp_ssiscatalog] @operation_type='exec',@exec_events_packagesexcluded='SomePackage.dtsx,AnotherPackage.dtsx';
    - [event_message_id] is now returned in a list of events
    - The list of executions can now be filtered via a minimum and maximum execution_id: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_minimum_execution_id=198,@execs_maximum_execution_id=201
    - Events resultsets now have a field, [event_message_context_xml], that contains an XML document with all [event_message_context] info (if any exists)
    Installation instructions
    1. Download the zip file at DB v1.0.1.0. It contains two files, SsisReportingPack.dacpac & SSISDB.dacpac.
    2. Unzip to a folder of your choosing.
    3. Open a command prompt and change to the directory into which you unzipped the files.
    4. Execute: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) (/tsn specifies the target server. Change as appropriate.)
    If everything works OK you'll see output confirming the publish (the exact output differs depending on whether the target database already exists or not). This will create a database called [SsisReportingPack] which contains [dbo].[sp_ssiscatalog]. Feedback is welcomed! @Jamiet

    Read the article

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS, for internal related "domains", (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, only the hosts configured with those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple weeks without issue on one machine, then suddenly stop working, or it'll happen on another 15 times per day. It's completely random for all workstations. When troubleshooting, I have opened up a command prompt, and issued various nslookup commands for some of these hosts, and they resolve, however I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc. The only solution thus far, is manually restarting the Windows DNS Client on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful enough to even attempt before restarting the DNS Client. I have tried two different DNS servers; BIND9, and Windows Server 2003 R2 DNS, and the behavior is the same. We have a single Netgear JGS524 switch all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.
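    For what it's worth, the manual workaround described above (flush the resolver cache and restart the DNS Client service) can be scripted as a small sketch like this, run from an elevated prompt (Dnscache is the service name behind "DNS Client" on this era of Windows; adjust if your environment differs):

      ipconfig /flushdns
      net stop dnscache
      net start dnscache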

    Read the article

  • Windows server 2003 default administrator password

    - by Jason Baker
    Sorry if this is an overly simplistic question, but I'm a bit stuck here. :) I need a windows machine for me to do some programming for class. Since I have my Macbook with me everywhere I go, I figured that it would be easiest to install a vm. And since I can get a copy of Windows server 2k3 for free via dreamspark, I thought I'd try to do that. Here's what happened though: I installed windows server (disk one). When the system booted up, vmware automatically installed VMWare tools and prompted me to restart. There was also a prompt to start the installation of disc 2, but I figured it would be better to restart before doing that. When the machine came back up, I was prompted to log in as the administrator. The problem is that I wasn't prompted to make an administrator account or password. Is there a default password I can use? I've tried all the obvious ones (blank, password, etc) and googling, but I didn't come up with anything.

    Read the article

  • Deploying an MVC 2.0 .Net 4.0 application to Windows Server 2003

    - by Brett Rigby
    I've spent the past couple of nights on this, and have come to the conclusion that I'm doing something wrong! Basically, I have my .Net 4.0 MVC 2.0 application deployed to my 2003 server and it constantly gives me the following error: "The value for the 'compilerVersion' attribute in the provider options must be 'v4.0' or later if you are compiling for version 4.0 or later of the .NET Framework. To compile this Web application for version 3.5 or earlier of the .NET Framework, remove the 'targetFramework' attribute from the <compilation> element of the Web.config file." I also have a .Net 3.5 MVC 1.0 application deployed to the same server (from before the upgrade to .Net 4.0), and it works absolutely fine. I have re-run through Phil Haack's post on IIS6, making the necessary changes for .Net 4.0, but no luck. I have been through the steps in the British Developer's post too, but no joy. I have also tried the steps outlined in Johan's blog post, but the settings he mentions were already enabled. Obviously, I can't simply upgrade to a 2008 box, nor do I want to downgrade my software back to < 4.0 - any other ideas?
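    For context, this error usually means a Web.config written for .NET 4.0 is being processed by an application pool still running the 2.0 runtime, so the targetFramework attribute and the <system.codedom> compiler settings disagree with the runtime. A minimal sketch of the 4.0-style fragments the error text refers to looks something like this (an illustration based on the error message, not the poster's actual config):

      <configuration>
        <system.web>
          <compilation debug="false" targetFramework="4.0" />
        </system.web>
        <system.codedom>
          <compilers>
            <compiler language="c#;cs;csharp" extension=".cs"
                      type="Microsoft.CSharp.CSharpCodeProvider, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
                      warningLevel="4">
              <providerOption name="CompilerVersion" value="v4.0" />
            </compiler>
          </compilers>
        </system.codedom>
      </configuration>

    If the config already looks like this, it may be worth double-checking in IIS6 that the web site is assigned to ASP.NET 4.0 rather than 2.0.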

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My Problem: I am using Matlab COM Automation Server to call and execute matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows Application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program I get the following error: "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, culture=neutral, PublicKeyToken=null' or one of its dependencies. the system cannot find the file specified." What I've Tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff and they suggested that it has something to do with their DotNet SDK having trouble with references that are located far outside the CAD program directory. They didn't have any solutions so I tried playing around with copylocal and this made no difference. I tried using other COM libraries and the Open Office automation server works fine, although uses url's instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought was worthwhile to share. Perhaps is there another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben

    Read the article

  • What is a good embeddable Java LDAP server?

    - by LeedsSideStreets
    I'm working on a Java web application that integrates with a few other external applications that are deployed along with it. Authentication information must be synchronized across everything and the other applications want to authenticate against LDAP. The application will be deployed in environments where there will be no other LDAP server for everything to use; I have to provide it. My solution so far has been to use Penrose Server as a standalone app, which I set up to examine tables in the main application's database and publish LDAP based on that. It works well, but it would be nice to have something that can be embedded into the main application itself to simplify deployment. It looks like Penrose can be embedded, but the documentation can be a bit spotty or out-of-date (though it seems to be actively developed). It could be an acceptable solution, but if there is another out there that is known to work well in an embedded configuration I might want to check it out. I'm also concerned about GPL issues with Penrose. I'm not at liberty to GPL the source code for the application. I don't believe it was an issue running it standalone, but embedding it may be no-no... anybody know for sure? A permissive license would be good in order to avoid these issues. Requirements: LDAP v3 Must be able to be have the directory contents updated while running, either programmatically or by another means like syncing with the database as Penrose does Easy to configure (no additional configuration for the app at deployment time would be ideal) So far I've briefly looked at ApacheDS and OpenDS which seem to be embeddable. Does anyone have experience with this kind of thing?

    Read the article

  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added in the .aspx page like this:

      <cc:SimpleGrid ID="SimpleGrid1" runat="server">
        <TemplatedColumn>ID: <%# Eval("ID") %></TemplatedColumn>
        <TemplatedColumn>Name: <%# Eval("Name") %></TemplatedColumn>
        <TemplatedColumn>Age: <%# Eval("Age") %></TemplatedColumn>
      </cc:SimpleGrid>

    and when providing the following datasource:

      DataTable table = new DataTable();
      table.Columns.Add("ID", typeof(int));
      table.Columns.Add("Name", typeof(string));
      table.Columns.Add("Age", typeof(int));
      table.Rows.Add(1, "Bob", 35);
      table.Rows.Add(2, "Sam", 44);
      table.Rows.Add(3, "Ann", 26);
      SimpleGrid1.DataSource = table;
      SimpleGrid1.DataBind();

    the result should be a HTML table like this one:

      -------------------------------
      | ID: 1 | Name: Bob | Age: 35 |
      -------------------------------
      | ID: 2 | Name: Sam | Age: 44 |
      -------------------------------
      | ID: 3 | Name: Ann | Age: 26 |
      -------------------------------

    The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ...

      private ITemplate _TemplatedColumn;

      [PersistenceMode(PersistenceMode.InnerProperty)]
      [TemplateContainer(typeof(TemplatedColumnItem))]
      [TemplateInstance(TemplateInstance.Multiple)]
      public ITemplate TemplatedColumn
      {
          get { return _TemplatedColumn; }
          set { _TemplatedColumn = value; }
      }

    ... and subsequently instantiate the template in CreateChildControls, I got the following result, which is not what I want:

      -----------
      | Age: 35 |
      -----------
      | Age: 44 |
      -----------
      | Age: 26 |
      -----------

    I know that what I want to achieve is pointless and that I can use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this I would be able to develop the control that I need. Thank you.

    Read the article

  • problem with ajax on the hosting server

    - by nelly
    When I implemented the chatting function I used Ajax to send messages from one file to another. It works well on localhost, but when I upload it to the remote server it doesn't work. Can you tell me why? Does Ajax need special configuration? These are the files I used:
    - Ajax.js, which has the "ajax_send" function that I use in chatbox.js
    - chatbox.js, which contains the functions I use to send data from one PHP file to another and to display the state (a user signing in, a new message being sent, and so on)
    - user.php, which is responsible for writing the user name into the text file usersonline.txt and then displaying the online users in the online users column
    - send.php, which writes to room1.txt
    - recive.php, which reads room1.txt and then writes the content into the chat box
    I believe the problem comes from the Ajax code in Ajax.js, so please help me find the problem and how to solve it. Does Ajax need special server settings? It was working on localhost!

    Read the article

  • Terminal / Panel PC - Single Server Solution: Client/Server or RDP?

    - by StillLearning
    Hi, our current setup involves a touch-screen panel PC with embedded Windows, connected via the network to a server / dedicated PC in the same physical location. Each of our 'units' has this hardware setup. For a quick resolution we deploy our application to the dedicated PC and have the panel PC Remote Desktop to an account which then activates the application. This works but feels like a clunky / rough approach. We did this because the panel PC is rather limited. Now that we have more time, I was wondering if I should separate the application into a GUI part and an application part: deploy the GUI logic on the panel PC, and the business/database logic on the dedicated PC. The app is in Java, so I was wondering what technology would be best. I was thinking of using RMI, but it's not really a client/server app, as there is only one client. Should I stick with RMI, or use sockets or something else? It will be easy to implement as the application is old, and manually wraps and unwraps data which passes through one class / method call to remote services. All I would have to do is 'RMI' this one method call, and the app will do its own stuff. Cheers.
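    As a rough illustration of what 'RMI-ing' that single method call might look like, here is a minimal sketch using the standard java.rmi API (all class, host and method names are invented for the example, not taken from the poster's application):

      import java.rmi.Remote;
      import java.rmi.RemoteException;
      import java.rmi.registry.LocateRegistry;
      import java.rmi.registry.Registry;
      import java.rmi.server.UnicastRemoteObject;

      // Remote interface for the one business call the GUI needs.
      interface BusinessService extends Remote {
          String process(String wrappedRequest) throws RemoteException;
      }

      // Server side (dedicated PC): export the implementation and bind it in a registry.
      class BusinessServiceImpl implements BusinessService {
          public String process(String wrappedRequest) throws RemoteException {
              return "result-for:" + wrappedRequest; // existing wrap/unwrap logic would go here
          }

          public static void main(String[] args) throws Exception {
              BusinessService stub =
                  (BusinessService) UnicastRemoteObject.exportObject(new BusinessServiceImpl(), 0);
              Registry registry = LocateRegistry.createRegistry(1099);
              registry.rebind("BusinessService", stub);
          }
      }

      // Client side (panel PC): look up the stub and call it like a local method.
      class PanelClient {
          public static void main(String[] args) throws Exception {
              Registry registry = LocateRegistry.getRegistry("dedicated-pc-host", 1099);
              BusinessService service = (BusinessService) registry.lookup("BusinessService");
              System.out.println(service.process("some-request"));
          }
      }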

    Read the article

  • Java Stop Server Thread

    - by ikurtz
    The following code is the server code in my app:

      private int serverPort;
      private Thread serverThread = null;

      public void networkListen(int port){
          serverPort = port;
          if (serverThread == null){
              Runnable serverRunnable = new ServerRunnable();
              serverThread = new Thread(serverRunnable);
              serverThread.start();
          } else {
          }
      }

      public class ServerRunnable implements Runnable {
          public void run(){
              try {
                  //networkConnected = false;
                  //netMessage = "Listening for Connection";
                  //networkMessage = new NetworkMessage(networkConnected, netMessage);
                  //setChanged();
                  //notifyObservers(networkMessage);
                  ServerSocket serverSocket = new ServerSocket(serverPort, backlog);
                  commSocket = serverSocket.accept();
                  serverSocket.close();
                  serverSocket = null;
                  //networkConnected = true;
                  //netMessage = "Connected: " + commSocket.getInetAddress().getHostAddress() + ":" + commSocket.getPort();
                  //networkMessage = new NetworkMessage(networkConnected, netMessage);
                  //setChanged();
                  //notifyObservers(networkMessage);
              } catch (IOException e){
                  //networkConnected = false;
                  //netMessage = "ServerRunnable Network Unavailable";
                  //System.out.println(e.getMessage());
                  //networkMessage = new NetworkMessage(networkConnected, netMessage);
                  //setChanged();
                  //notifyObservers(networkMessage);
              }
          }
      }

    The code sort of works, i.e. if I'm attempting a straight connection both ends communicate and update. The issue is that while I'm listening for a connection, if I want to quit listening, the server thread continues running and causes problems. I know I should not use .stop() on a thread, so I was wondering what the solution would look like with this in mind? EDIT: commented out unneeded code.
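    A common pattern for this situation, offered here only as a hedged sketch (it is not part of the original post), is to keep a reference to the ServerSocket and close it from a stop method; the blocking accept() then throws an IOException and the thread can exit cleanly:

      import java.io.IOException;
      import java.net.ServerSocket;
      import java.net.Socket;

      public class StoppableListener implements Runnable {
          private final int port;
          private volatile boolean running = true;    // flag checked by the loop
          private volatile ServerSocket serverSocket; // kept so we can close it from outside

          public StoppableListener(int port) { this.port = port; }

          public void run() {
              try {
                  serverSocket = new ServerSocket(port);
                  while (running) {
                      Socket commSocket = serverSocket.accept(); // unblocks when the socket is closed
                      // ... hand commSocket to the rest of the application ...
                  }
              } catch (IOException e) {
                  // accept() lands here when stopListening() closes the socket; fall through and exit.
              }
          }

          public void stopListening() {
              running = false;
              try {
                  if (serverSocket != null) serverSocket.close(); // interrupts the blocking accept()
              } catch (IOException ignored) {
              }
          }
      }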

    Read the article

  • Sending files from server to client in Java

    - by Lee Jacobson
    Hi, I'm trying to find a way to send files of different file types from a server to a client. I have this code on the server to put the file into a byte array:

      File file = new File(resourceLocation);
      byte[] b = new byte[(int) file.length()];
      FileInputStream fileInputStream;
      try {
          fileInputStream = new FileInputStream(file);
          try {
              fileInputStream.read(b);
          } catch (IOException ex) {
              System.out.println("Error, Can't read from file");
          }
          for (int i = 0; i < b.length; i++) {
              fileData += (char)b[i];
          }
      } catch (FileNotFoundException e) {
          System.out.println("Error, File Not Found.");
      }

    I then send fileData as a string to the client. This works fine for txt files but when it comes to images I find that although it creates the file fine with the data in, the image won't open. I'm not sure if I'm even going about this the right way. Thanks for the help.
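    For comparison, here is a minimal, generic sketch of streaming the raw bytes over the socket's OutputStream rather than round-tripping them through a String (the class and method names are made up for the illustration and are not from the question):

      import java.io.BufferedInputStream;
      import java.io.FileInputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.Socket;

      public class FileSender {
          // Streams the file to the client in binary-safe chunks.
          public static void sendFile(Socket socket, String resourceLocation) throws IOException {
              InputStream in = new BufferedInputStream(new FileInputStream(resourceLocation));
              try {
                  OutputStream out = socket.getOutputStream();
                  byte[] buffer = new byte[8192];
                  int read;
                  while ((read = in.read(buffer)) != -1) {
                      out.write(buffer, 0, read);
                  }
                  out.flush();
              } finally {
                  in.close();
              }
          }
      }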

    Read the article

  • Compiling a C++ application on Windows 7, but execute it on Win2003 Server

    - by dabs
    I have a C++ application (quite complex, multiple projects) in Visual Studio 2008, that produces a single dll. Recently I switched to Windows 7, but had previously been compiling under Windows XP. Suddenly the dll in question cannot be loaded by another application, i.e. on a machine running Windows 2003 Server. I've been trying various things: I've installed the VC9.0 redistributable package on the server Also copied various .dll's from that package to the application folder The project is of course compiled in release mode When I run depends.exe on the client machine, I do get the following error: "Error: The Side-by-Side configuration information for "my_dll.dll" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001). Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." and the icon for shlwapi.dll has a red overlay icon. This didn't happen when I was compiling under WinXP, so I'm guessing that there really is no problem with the .dll's on the client machine, but somewhere there is a reference to that particular version of some dll. Does anyone know what would be the best way to resolve this? Regards, Daníel

    Read the article

  • Windows Server 2008 R2 File Permissions

    - by Fly_Trap
    I'm having some problems understanding some particular file permissions behaviour. Here are the steps to reproduce:
    1. Log into the server using the default Administrator account.
    2. Create a text file (testfile.txt) in C:\ProgramData containing some arbitrary text.
    3. Create a new user account and make it a member of the Administrators group.
    4. Log in using the new account and open C:\ProgramData\testfile.txt.
    5. Edit the text and try to save.
    Upon clicking save I'm presented with the Save As dialog, which indicates that I do not have the necessary permissions to edit the file. This seems odd considering that the new user account is a member of Administrators. When I view the permissions of the file I can see that there are three groups listed: SYSTEM, Administrators and Users. SYSTEM and Administrators have full permissions; however, Users only has the Read & Execute and Read permissions checked. It would appear that when I open testfile.txt from the new user's account, it opens in the context of the Users group despite the account being a member of Administrators. Is this correct? It would certainly explain the behaviour. The reason this is an issue for me is that I need to know whether, if I deploy an application via 'Run as Administrator', normal users will be able to edit the text files I install to ProgramData. Is this behaviour confined to Windows Server, or is it the same in Vista and Win7?

    Read the article

  • Server side aggregation using Spring MVC Controllers

    - by Venu
    Hi! We have a web site with multiple pages, each with a lot of AJAX calls. All the AJAX calls go to a data proxy layer written as a Spring 3 MVC application. In order to reduce the load on the server we are planning to aggregate calls on the client and send them to the proxy layer. For example, we have two calls, /controller1/action1 and /controller2/action2, in different controllers. I want to write an aggregate controller and call it from the client instead of making two calls to the individual controllers - something like /aggregatecontroller/aggregate (and we will post the information regarding which calls are being aggregated and the required parameters). Is there any way we can call another controller/action from a controller/action and get its output? The flow will be something like this:
    a. Call /aggregatecontroller/aggregate with info regarding call1 and call2
    b. The aggregate action understands that it needs to aggregate call1 and call2
    c. It calls /controller1/action1 and gets the response
    d. It calls /controller2/action2 and gets the response
    e. It merges both into a JSON response and sends it back to the browser
    Please let me know if you have any suggestion regarding how to do this, or if you think there is a better approach to server-side aggregation. Thanks for your time.
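    One way to sketch this, purely as an illustration (the service and class names are invented, and it assumes the real logic lives in injectable service beans rather than inside the individual controllers), is an aggregate controller that calls the same services the individual controllers use and merges their results into a single JSON response:

      import java.util.HashMap;
      import java.util.Map;

      import org.springframework.beans.factory.annotation.Autowired;
      import org.springframework.stereotype.Controller;
      import org.springframework.web.bind.annotation.RequestMapping;
      import org.springframework.web.bind.annotation.RequestParam;
      import org.springframework.web.bind.annotation.ResponseBody;

      interface Service1 { Object action1(String p); } // hypothetical: logic behind /controller1/action1
      interface Service2 { Object action2(String p); } // hypothetical: logic behind /controller2/action2

      @Controller
      @RequestMapping("/aggregatecontroller")
      public class AggregateController {

          @Autowired private Service1 service1;
          @Autowired private Service2 service2;

          // The client posts one request listing what it needs; we fan out on the server.
          @RequestMapping("/aggregate")
          @ResponseBody
          public Map<String, Object> aggregate(@RequestParam("param1") String param1,
                                               @RequestParam("param2") String param2) {
              Map<String, Object> merged = new HashMap<String, Object>();
              merged.put("action1", service1.action1(param1));
              merged.put("action2", service2.action2(param2));
              return merged; // serialized to JSON by the configured message converter (e.g. Jackson)
          }
      }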

    Read the article

  • WPF C# Client/Server announcement system

    - by manemawanna
    I'm currently in the process of creating an announcement system at my place of work. The role of this system will be to replace all-users email, due to people misusing it and generally abusing the facility. The system will consist of:
    - Web Portal: will allow staff to enter any important announcements (restricted via AD).
    - SQL Server 2k5 DB: will hold the announcements along with records of staff members and whether they've read the announcements, etc.
    - Front End: created in WPF & C# and nearly complete; it will display the announcements to the users.
    - Web Page: the client will contact it every so often and it will return an XML file for the client to read.
    However, my boss has now shifted the goal posts and would like announcements to appear to the user as soon as they are written to the database, rather than waiting for the client to contact the web page. So now I'm a bit unsure how to go about this. One idea is to create a small server application that monitors for new announcements and then contacts the clients to tell them to fetch the information they need from the website. But I'm just looking to see if there's a better or more efficient way to do this, or if someone else has a more appropriate idea or suggestion.

    Read the article

  • Thumbnail image saved with worse quality on Windows Server 2003

    - by Angelo
    Hello, in an asp.net 2.0 application I am trying to create thumbnails from uploaded images. When I test the application on my PC under Windows 7 it works fine, but on the real Windows 2003 Server the resized image has worse quality. Where could this difference come from? A different JPEG codec, or something else? If so, how can it be updated on Windows 2003 Server? Thanks! Here is the code.
    Resize of the image:

      Bitmap newBmp = new Bitmap(imgWidth, imgHeight, PixelFormat.Format24bppRgb);
      newBmp.SetResolution(inputBmp.HorizontalResolution, inputBmp.VerticalResolution);

      //Create a graphics object attached to the new bitmap
      Graphics newBmpGraphics = Graphics.FromImage(newBmp);
      newBmpGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
      newBmpGraphics.SmoothingMode = SmoothingMode.HighQuality;
      newBmpGraphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
      newBmpGraphics.DrawImage(inputBmp,
          new Rectangle(0, 0, imgWidth, imgHeight),
          new Rectangle(0, 0, inputBmp.Width, inputBmp.Height),
          GraphicsUnit.Pixel);

    Save of the image:

      System.IO.Stream imgStream = new System.IO.MemoryStream();

      //Get the ImageCodecInfo for the desired target format
      ImageCodecInfo destCodec = FindCodecForType(ImageMimeTypes.JPEG);
      if (destCodec == null)
      {
          //No codec available for that format
          throw new ArgumentException("The requested format image/jpeg does not have an available codec installed", "destFormat");
      }

      //Create an EncoderParameters collection to contain the
      //parameters that control the dest format's encoder
      EncoderParameters destEncParams = new EncoderParameters(1);
      EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)quality);
      destEncParams.Param[0] = qualityParam;

      //Save w/ the selected codec and encoder parameters
      inputBmp.Save(imgStream, destCodec, destEncParams);
      Bitmap destBitmap = new Bitmap(imgStream);

    Read the article

  • jquery dialog calls a server side method with button parameters

    - by Chiefy
    Hello all, I have a GridView control with a delete asp:ImageButton for each row of the grid. What I would like is for a jQuery dialog to pop up when a user clicks the delete button, asking if they are sure they want to delete the row. So far I have the dialog coming up just fine, I've got buttons on that dialog, and I can make the buttons call server-side methods; the problem is getting the dialog to know the ID of the row that the user has selected and then passing that to the server-side code. The button in the page row is currently just an 'a' tag with the id 'dialog_link'. The jQuery on the page looks like this:

      $("button").button();

      $("#DeleteButton").click(function () {
          $.ajax({
              type: "POST",
              url: "ManageUsers.aspx/DeleteUser",
              data: "{}",
              contentType: "application/json; charset=utf-8",
              dataType: "json",
              success: function (msg) {
                  // Replace the div's content with the page method's return.
                  $("#DeleteButton").text(msg.d);
              }
          });
      });

      // Dialog
      $('#dialog').dialog({
          autoOpen: false,
          width: 400,
          modal: true,
          bgiframe: true
      });

      // Dialog Link
      $('#dialog_link').click(function () {
          $('#dialog').dialog('open');
          return false;
      });

    The dialog itself is just a set of 'div' tags. I've thought of lots of different ways of doing this (parameter passing, session variable, etc.) but can't figure out how to get any of them working. Any ideas are most welcome. As always, thanks in advance to those who contribute.
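    One hedged sketch of wiring the row ID through (assuming each per-row delete link carries its key in a data-* attribute and that the page method accepts an id parameter; none of these names come from the original page):

      // Use a class rather than a repeated id for the per-row delete links, and store the
      // row's key on the element, e.g. <a class="delete-link" data-userid="42">...</a>
      $('.delete-link').click(function () {
          var userId = $(this).data('userid');      // remember which row was clicked
          $('#dialog').data('userid', userId);      // stash it on the dialog
          $('#dialog').dialog('open');
          return false;
      });

      $('#DeleteButton').click(function () {
          var userId = $('#dialog').data('userid'); // retrieve the stashed id
          $.ajax({
              type: "POST",
              url: "ManageUsers.aspx/DeleteUser",
              data: JSON.stringify({ userId: userId }), // assumes the page method takes the id as a parameter
              contentType: "application/json; charset=utf-8",
              dataType: "json",
              success: function (msg) {
                  $('#dialog').dialog('close');
              }
          });
      });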

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx
    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money, whether renting space at a data centre or building out a local facility. In some cases this can be a catalyst for evaluating options to remove the infrastructure requirement entirely by moving to a distributed computing environment.
    Implementation: There are three main options for moving to a distributed computing environment.
    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.
    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to purchase not only the software license but also its use on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.
    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit of using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis for letting on-premise applications leverage distributed computing paradigms.
    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage to this form of computing - choice.
    References:
    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate?
    I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.
    And how did you get into software development?
    My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s.
    What's the best thing about working as a developer at Red Gate?
    Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in big contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.
    Tell us a bit about the products you are currently working on.
    You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.
    What's a typical day for you?
    My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long and short term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.
Have you noticed anything different about developing tools for DBAs rather than other IT kinds of user? It seems to me that DBAs are quite independent minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look for what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their set up, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article
