Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.


  • Oracle Magazine, September/October 2008

    Oracle Magazine September/October features articles on Oracle Universal Content Management, identity management, security, Merrill Lynch and Oracle, ODP.NET, best PL/SQL practices, task flows, Oracle SQL Developer 1.5, Oracle Flashback technology, trigger maintenance and much more.

    Read the article

  • Oracle Magazine - Sep/Oct 2010

    Oracle Magazine Sep/Oct features articles on Oracle Exadata, Database Security, Oracle Enterprise Manager 11g, PL/Scope to analyze your PL/SQL, Using Oracle Essbase Release 11.1.2 Aggregate Storage Option Databases, Oracle Application Express 4.0 Websheets, Oracle Automatic Storage Management disk groups, Tom Kyte revisits a classic, recounts Cardinality Feedback, and remembers SQL*Plus and much more.

    Read the article

  • FocusOPEN Digital Asset Manager

    - by csharp-source.net
FocusOPEN is a free and open source ASP.NET Digital Asset Management system written in C# and SQL Server (T-SQL). It includes a number of enterprise-class features such as a dedicated media processing server, multi-brand support, flexible configurable metadata, faceted and filtered search interfaces (as well as full-text indexing), and sophisticated security and user access roles. FocusOPEN is available under both an AGPL and a commercial licence.

    Read the article

  • Easy QueryBuilder - A User-Friendly Ad-Hoc Advanced Search Solution

    Constructing an easy and powerful QueryBuilder interface is increasingly important for complex data grid filtering and accurate reporting services. In this article, I'll discuss how to build a query search engine using ASP.NET AJAX and dynamic SQL. The main goal is to provide an interactive interface that allows users to select query attributes, operators, attribute values, and T-SQL operators so that the data context query list can be easily composed and a search engine invoked.
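
    For example, the server side of such a query builder can append predicates dynamically while keeping the user-supplied values parameterized; a minimal T-SQL sketch using sp_executesql (the table and column names are hypothetical, for illustration only):

        DECLARE @sql nvarchar(max);
        SET @sql = N'SELECT OrderID, CustomerName, OrderTotal
                     FROM dbo.Orders
                     WHERE 1 = 1';

        -- Each attribute/operator pair picked in the UI appends a predicate.
        -- The operator comes from a whitelist; the value stays a parameter,
        -- never concatenated into the string.
        SET @sql = @sql + N' AND OrderTotal >= @minTotal';

        EXEC sp_executesql @sql, N'@minTotal money', @minTotal = 100;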

    Read the article

  • More Windows 7 Tips and Tricks

    Welcome back to part two of our article series on tips and tricks for Windows 7. Let's continue where we left off with some more ways to maximize your Windows 7 experience.

    Read the article

  • How to debug stored procedures, functions, and triggers in SSMS 2008, by hmira

    hmira offers an article explaining how to debug a stored procedure, a function, or a trigger in SSMS 2008 (SQL Server 2008 Management Studio). It describes how to set breakpoints, step through T-SQL blocks, and inspect the contents of local variables and @@ system variables, among other things. Thanks to him. >> For more details. Your remarks and suggestions are welcome.

    Read the article

  • I got a MySQL error on this statement and I don't know why [closed]

    - by John Smiith
    I got a MySQL error on this statement and I don't know why. The error is: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CONSTRAINT fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code) ) ENG' at line 6. The SQL code is:

        CREATE TABLE IF NOT EXISTS `class` (
          `numero` int(11) NOT NULL AUTO_INCREMENT,
          `type_class` varchar(100) DEFAULT NULL,
          `images` varchar(200) NOT NULL,
          PRIMARY KEY (`numero`)
          CONSTRAINT fk_objet_code FOREIGN KEY (objet_code) REFERENCES objet(code)
        ) ENGINE=InnoDB;;
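
    A likely fix, assuming the table is meant to carry an objet_code column referencing objet(code): MySQL requires a comma between the PRIMARY KEY clause and the CONSTRAINT clause, and the foreign key column itself must be declared. A corrected sketch:

        CREATE TABLE IF NOT EXISTS `class` (
          `numero` int(11) NOT NULL AUTO_INCREMENT,
          `type_class` varchar(100) DEFAULT NULL,
          `images` varchar(200) NOT NULL,
          `objet_code` int(11) NOT NULL,  -- assumed type; must match objet.code exactly
          PRIMARY KEY (`numero`),         -- the comma was missing here
          CONSTRAINT fk_objet_code FOREIGN KEY (`objet_code`) REFERENCES objet(`code`)
        ) ENGINE=InnoDB;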

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiars (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore?

    The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
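
    To illustrate the dynamic SQL alternative: under the naming-scheme assumption above, the query the API would generate for the example request (Location and Weekly dimensions, Product Sales $ metric) might look like the following. All table and column names here are hypothetical, for illustration only:

        SELECT d_loc.store_name,
               d_wk.week_number,
               SUM(f.product_sales) AS product_sales
        FROM FACT_SALES f
        JOIN DIM_LOCATION d_loc ON f.location_key = d_loc.location_key
        JOIN DIM_DATE d_wk      ON f.date_key     = d_wk.date_key
        GROUP BY d_loc.store_name, d_wk.week_number;

    The API's job then reduces to mapping the user's picks onto join keys and aggregates read from the config table, rather than onto cube definitions.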

    Read the article

  • How to stop Office 2010 changing " and ' to smart quotes

    - by fatherjack
    I have recently upgraded to Office 2010 at work and there are a few things that are a real problem for me. As a T-SQL developer and SQL Server DBA I copy and paste code to and from various applications, and if Word gets involved it can have disastrous consequences. There is an option that appears to default to "on" that changes a straight quote to what Word describes as a smart quote - see the image below. Note - the single quote suffers from the same effect. Now, getting to the point that...(read more)

    Read the article

  • XML DATATYPE (series 1)

    New to SQL Server 2005 is the XML data type, which lets you store XML documents and fragments in a SQL Server database. An XML fragment is an XML instance that is missing a single top-level element. You can create columns and variables of the XML type and store XML instances in them. Note that the stored representation of XML data type instances cannot exceed 2 GB.
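
    For instance, a minimal sketch of declaring and querying an XML column (the table and element names are hypothetical):

        CREATE TABLE dbo.Docs (
            id      int IDENTITY PRIMARY KEY,
            payload xml NOT NULL   -- holds documents or fragments, up to 2 GB
        );

        -- A complete document (single top-level element)...
        INSERT INTO dbo.Docs (payload)
        VALUES (N'<order id="1"><item sku="A42"/></order>');

        -- ...and a fragment (no single top-level element) both work.
        INSERT INTO dbo.Docs (payload)
        VALUES (N'<item sku="A42"/><item sku="B7"/>');

        -- The xml type exposes XQuery methods such as value().
        SELECT payload.value('(/order/@id)[1]', 'int') AS order_id
        FROM dbo.Docs;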

    Read the article

  • Oracle Magazine, November/December 2008

    Oracle Magazine November/December features articles on our Editors' Choice Awards 2008, the new HP Oracle Database Machine, using task flows, Cursor FOR Loops, Oracle Data Access Components, Oracle Active Data Guard, SQL Developer and PL/SQL constructs, Oracle Database 11g, questions for Tom Kyte and much more.

    Read the article

  • Chapter 3: Data-Tier Applications

    With the release of Microsoft SQL Server 2008 R2, the SQL Server Manageability team addressed these struggles by introducing support for data-tier applications to help streamline the deployment, management, and upgrade of database applications. A data-tier application, also referred to as a DAC, is a single unit of deployment that contains all the elements used by an application, such as the database application schema, instance-level objects, associated database objects, files and scripts, and even a manifest defining the organization's deployment requirements.

    Read the article

  • Database Management for SharePoint 2010

    With each revision, SharePoint becomes more of a SQL Server database application, with everything that implies for planning and deployment. There are advantages to this: SharePoint can make use of mirroring, data compression, and remote BLOB storage. It can employ advanced tools such as data file compression and object-level restore. DBAs can employ familiar techniques to speed up SharePoint applications. Bert explains the way that SharePoint and SQL Server interact.
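
    As one concrete example of those familiar techniques, page compression can be enabled on a large table with plain T-SQL (the table name here is hypothetical, and altering SharePoint databases directly may be restricted by support policy, so treat this as a sketch):

        ALTER TABLE dbo.LargeContentTable
            REBUILD WITH (DATA_COMPRESSION = PAGE);  -- trades some CPU for much less I/O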

    Read the article

  • InsertItemTemplate FormView Example in ASP.NET 3.5

    Inserting records into an MS SQL Server database from an ASP.NET 3.5 web page is often necessary, so it's important for many web applications to be able to do this. This article will walk you through creating a real application that can take information from a form built in ASP.NET and put it into an MS SQL Server database.

    Read the article

  • Oracle Database 12c Seminar (October 29)

    - by user763299
    [The original announcement is in Chinese and largely garbled in this copy.] It announces a free Oracle Database 12c hands-on seminar held on October 29, 2013, running roughly 09:00-16:00, with sessions covering: an overview of Oracle Database 12c; database management with Oracle SQL Developer; Data Redaction with Oracle SQL Developer (creating, modifying, and managing redaction policies); and performance monitoring with Oracle Enterprise Manager Express. Contact: 13391606778, 010-64418595-6056, Email: [email protected]

    Read the article

  • CentOS 5.5 : Postfix, Dovecot & MySQL

    - by GruffTech
    I'm hoping someone has seen this issue before because I'm at quite a loss. We're building a new outbound SMTP server for our clients that features anti-spam and virus scanning for outbound emails, something we had not previously done. So: CentOS 5.5 x64, installed and patched completely. Postfix and Dovecot both installed via the base repo.

        [grufftech@outgoing postfix]# rpm -qa | grep postfix
        postfix-2.3.3-2.1.el5_2
        [grufftech@outgoing postfix]# rpm -qa | grep dovecot
        dovecot-1.0.7-7.el5
        [grufftech@outgoing ~]# dovecot --build-options
        Build options: ioloop=poll notify=inotify ipv6 openssl
        SQL drivers: mysql postgresql
        Passdb: checkpassword ldap pam passwd passwd-file shadow sql
        Userdb: checkpassword ldap passwd prefetch passwd-file sql static

    /etc/dovecot.conf:

        auth default {
          mechanisms = plain login digest-md5 cram-md5
          passdb sql {
            args = /etc/dovecot-mysql.conf
          }
          userdb sql {
            args = /etc/dovecot-mysql.conf
          }
          userdb prefetch {
          }
          user = nobody
          socket listen {
            master {
              path = /var/run/dovecot/auth-master
              mode = 0660
              user = postfix
              group = postfix
            }
            client {
              path = /var/spool/postfix/private/auth
              mode = 0660
              user = postfix
              group = postfix
            }
          }
        }

    All the server is doing is auth for Postfix, so there's no reason to have imap / pop / dict. /etc/dovecot-mysql.conf:

        driver = mysql
        connect = host=10.0.32.159 dbname=mail user=****** password=********
        default_pass_scheme = plain
        user_query = select 1
        password_query = select password from users where username = '%n' and domain = '%d'

    So drop in my configuration (which is working on another server identical to this one):

        [grufftech@outgoing ~]# /etc/init.d/dovecot start
        Starting Dovecot Imap:                 [  OK  ]

    Sweet. Booted up nicely, that's good.... (incoming problem in 3....2....1....)

        May 21 08:09:01 outgoing dovecot: Dovecot v1.0.7 starting up
        May 21 08:09:02 outgoing dovecot: auth-worker(default): mysql: Connect failed to 10.0.32.159 (mail): Can't connect to MySQL server on '10.0.32.159' (13) - waiting for 1 seconds before retry

    Well, what the crap. Went and checked permissions on my MySQL database, and it's fine.

        [grufftech@outgoing ~]# mysql vpopmail -h 10.0.32.159 -u ****** -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 127828558
        Server version: 4.1.22
        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
        mysql> \q

    So! My server can talk to my database server, but dovecot, for whatever reason, isn't able to. I've fiddled with it for the last six hours: grabbed slightly older copies of the RPM (ones that matched our production server exactly) to test those, copied configs, searched Google, searched Server Fault, chatted in IRC, banged my head against the table. I've done it all. Surely I'm doing something wrong or forgetting something. Can anyone tell me what the elephant in the room is? This stuff is supposed to work.

    Read the article

  • How to make Horde connect to mysql with UTF-8 character set?

    - by jkj
    How do I tell Horde 3.3.11 to use UTF-8 for its MySQL connection? The $conf['sql']['charset'] setting only tells Horde what to expect from the database. Horde uses MDB2 to connect to MySQL. Is there a way to force MDB2 or the MySQL character_set_client from php.ini?

    So far I found two workarounds. Force MySQL to ignore the character set requested by the client:

        [mysqld]
        skip-character-set-client-handshake=1
        default-character-set=utf8

    Or force MySQL to run SET NAMES utf8 on every connection:

        [mysqld]
        init-connect='SET NAMES utf8'

    Both have drawbacks on a multi-user MySQL server. The first disables character set conversion altogether, and the second forces every connection to produce UTF-8.

    [EDIT] Found the problem. The 'charset' parameter was unset at the last minute before being sent to the SQL backend. This is probably because MySQL cannot digest the charset name utf-8, only utf8, so a MySQL-specific mapping is required to make it work. I just worked around it by translating utf-8 to utf8. This patch won't work with any other databases, though:

        --- lib/Horde/Share/sql.php.orig        2011-07-04 17:09:33.349334890 +0300
        +++ lib/Horde/Share/sql.php     2011-07-04 17:11:06.238636462 +0300
        @@ -753,7 +753,13 @@
                 /* Connect to the sql server using the supplied parameters. */
                 require_once 'MDB2.php';
                 $params = $this->_params;
        -        unset($params['charset']);
        +
        +        if ($params['charset'] == 'utf-8') {
        +            $params['charset'] = 'utf8';
        +        } else {
        +            unset($params['charset']);
        +        }
        +
                 $this->_write_db = &MDB2::factory($params);
                 if (is_a($this->_write_db, 'PEAR_Error')) {
                     Horde::fatal($this->_write_db, __FILE__, __LINE__);
        @@ -792,7 +798,13 @@
                 /* Check if we need to set up the read DB connection seperately. */
                 if (!empty($this->_params['splitread'])) {
                     $params = array_merge($params, $this->_params['read']);
        -            unset($params['charset']);
        +
        +            if ($params['charset'] == 'utf-8') {
        +                $params['charset'] = 'utf8';
        +            } else {
        +                unset($params['charset']);
        +            }
        +
                     $this->_db = &MDB2::singleton($params);
                     if (is_a($this->_db, 'PEAR_Error')) {
                         Horde::fatal($this->_db, __FILE__, __LINE__);

    Read the article

  • Hello it’s your server calling

    - by GrumpyOldDBA
    This is nothing exciting, but I've always found this startup procedure very useful. All this simple procedure does is send you an email when the SQL service starts. If your server is a cluster, it will tell you which node you're on. On its own this procedure can't actually be used, as I route the output through another procedure, dbasp_SendMessage; that procedure routes a passed message to either an SMTP email or a log table or both, the destination being set in a server config table...(read more)
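
    For anyone who wants the bare-bones version of the same idea, here is a minimal sketch that mails directly via Database Mail instead of a dbasp_SendMessage-style wrapper (the mail profile and recipient are placeholders, and the procedure must live in master to be marked as a startup procedure):

        USE master;
        GO
        CREATE PROCEDURE dbo.usp_NotifyOnStartup
        AS
        BEGIN
            DECLARE @msg nvarchar(300);
            SET @msg = N'SQL Server service started on ' + @@SERVERNAME
                     + N' at ' + CONVERT(nvarchar(30), GETDATE(), 120);

            EXEC msdb.dbo.sp_send_dbmail
                 @profile_name = 'DBA Mail',          -- placeholder profile
                 @recipients   = 'dba@example.com',   -- placeholder address
                 @subject      = 'SQL Server started',
                 @body         = @msg;
        END;
        GO
        -- Run it automatically every time the service starts.
        EXEC sp_procoption N'dbo.usp_NotifyOnStartup', 'startup', 'on';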

    Read the article

  • Practical Performance Monitoring and Tuning Event

    - by Andrew Kelly
      For any of you who may be interested, or who know of someone in the market for a performance monitoring and tuning class, I have just the ticket for you. It's a 3-day event that will be held in Atlanta, GA from January 25th to the 27th, 2011. For those of you that know me or have been to my sessions, you realize I like to provide more than just classroom theory, and like to teach real-world and above all practical methodology when it comes to performance in SQL Server. This class covers all the essentials...(read more)

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer, in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only "transformation" Replication Services offers is filtering records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this:

        SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60

    There are three types of replication:

    Transactional Replication: This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So for example, when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same.

    Snapshot Replication: This type of replication makes an initial copy of the database objects that need to be replicated, including the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber; transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files stored in the Snapshot Folder, a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).

    Merge Replication: Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don't really see BI capabilities here, I will not explain this type of replication any further.

    Replication Services in a BI environment
    Transactional Replication can be very useful in BI environments. In my opinion you never want to see users running custom (SSRS) reports or PowerPivot solutions directly against your production database; it can slow down the system and can cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don't need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!
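
    For the record filtering mentioned above, transactional replication exposes the row filter directly on the article definition; a hedged sketch (the publication, article, and table names are placeholders):

        EXEC sp_addarticle
             @publication   = N'BIReporting',   -- placeholder publication
             @article       = N'Orders',
             @source_object = N'Orders',
             @filter_clause = N'[OrderDate] >= DATEADD(DAY, -60, GETDATE())';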

    Read the article

  • Design Book–Fourth(last) Section (Physical Abstraction Optimization)

    - by drsql
    In this last section of the book, we will shift focus to physical abstraction layer optimization. By this I mean the little bits and pieces of the design that are specifically there for performance and are actually part of the relational engine (read: the part of the SQL Server experience that ideally is hidden from you completely, but in 2010 reality isn't quite so yet). This includes all of the data structures like databases, files, etc.; the optimizer; some coding; etc. In my mind, this...(read more)

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I originally asked this question on the MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77

    In short, this script tries to find attacking IP addresses and then add them to a firewall block rule. So I suppose:

    1, You are running a Windows Server 2008 box facing the Internet.
    2, You need to have some ports open for service, e.g. TCP 21 for FTP and TCP 3389 for Remote Desktop. You can see in my code I'm only dealing with these two since that's what I opened; you can add further port numbers if you like, but the way to process them might differ from these two.
    3, I strongly suggest you use a STRONG password and follow all security best practices. This ps1 code is NOT for adding security to your server, but for reducing the nuisance from brute force attacks and making the sysadmin's life easier: i.e. your FTP log won't hold megabytes of nonsense, and your Windows system log will not roll over so far that it can only tell you what happened last month.
    4, You are comfortable with setting up Windows Firewall rules. In my code, my rule has the name "MY BLACKLIST"; you need to set up a similar one and set it to BLOCK everything.
    5, My rule is dangerous because it carries the risk of blocking myself out as well. I do have a backup plan, i.e. the DELL DRAC5, so that if that happens I can still remote console to my server and reset the firewall.
    6, By no means is the code perfect: the coding style, the use of PowerShell skills, the hard-coded parts - all can be improved. It's just good enough for me already, and it has been running on my server for more than 7 MONTHS.
    7, The current code still has a problem which I haven't solved yet; more on this after the code. :)

        #Dong Xie, March 2012
        #my simple code to monitor attack and deal with it
        #Windows Server 2008 Logon Type
        #8: NetworkCleartext, i.e. FTP
        #10: RemoteInteractive, i.e. RDP

        $tick = 0;
        "Start to run at: " + (get-date);

        $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";
        $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";

        while ($True) {
          $blacklist = @();
          "Running... (tick:" + $tick + ")"; $tick += 1;

          #Port 3389
          $a = @()
          netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `
            $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") { $a = $a + $ip; } }
          if ($a.count -gt 0) {
            $ips = get-eventlog Security -Newest 1000 | Where-Object { $_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10" } | foreach { `
              $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique
            foreach ($ip in $a) {
              if ($ips -contains $ip) {
                if (-not ($blacklist -contains $ip)) {
                  $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;
                  "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;
                  if ($attack_count -ge 20) { $blacklist = $blacklist + $ip; }
                }
              }
            }
          }

          #FTP
          $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.
          #Get-EventLog has built-in switches for EventID, Message, Time, etc. but using any of these it will be VERY slow.
          $count = (Get-EventLog Security -Newest 1000 | Where-Object { $_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `
            $_.TimeGenerated.CompareTo($now) -gt 0 } | Measure-Object).count;
          if ($count -gt 50) #threshold
          {
            $ips = @();
            $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `
              | select -First 1 | gc | select -Last 200 | where { $_ -match "An\+error\+occured\+during\+the\+authentication\+process." } `
              | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
              | where { $_.Count -ge 10 } | select -ExpandProperty Name;
            $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `
              | select -First 1 | gc | select -Last 200 | where { $_ -match "An\+error\+occured\+during\+the\+authentication\+process." } `
              | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `
              | where { $_.Count -ge 10 } | select -ExpandProperty Name;
            $ips += $ips1; $ips += $ips2; $ips = $ips | where { $_ -ne "10.0.0.1" } | Sort-Object | Get-Unique;
            foreach ($ip in $ips) {
              if (-not ($blacklist -contains $ip)) {
                "Found attacking IP on FTP: " + $ip;
                $blacklist = $blacklist + $ip;
              }
            }
          }

          #Firewall change
          <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where { $_ -match "RemoteIP" }).replace("RemoteIP:", "").replace(" ", "").replace("/255.255.255.255", "");
          #inside $current there is no \r or \n to remove.
          foreach ($ip in $blacklist) {
            if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {
              "Adding this IP into firewall blocklist: " + $ip;
              $c = 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current;
              Invoke-Expression $c;
            }
          } #>

          foreach ($ip in $blacklist) {
            $fw = New-Object -comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx
            $myrule = $fw.Rules | where { $_.Name -eq "MY BLACKLIST" } | select -First 1; # Potential bug here?
            if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))
            {
              "Adding this IP into firewall blocklist: " + $ip;
              $myrule.RemoteAddresses += ("," + $ip);
            }
          }

          Wait-Event -Timeout 30 #pause 30 secs

        } # end of top while loop.

    Further points:

    1, I suppose the server is listening on port 3389 on server IPs 192.168.100.101 and 192.168.100.102; you need to replace those with your real IPs.
    2, I suppose you Remote Desktop to this server from a workstation with IP 10.0.0.1. Please replace that as well.
    3, The threshold for a 3389 attack is 20; you don't want to block yourself just because you typed your password wrong 3 times, and you can change this threshold by your own reasoning.
    4, The FTP check looks at the log for attacks only over the last 5 minutes; you can change that too.
    5, I suppose the server is serving FTP on both IP addresses and their log paths are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly.
    6, The FTP check only reads the last 200 lines of the log, and the threshold is 10; change as you wish.
    7, The code runs in a loop; you can set the loop interval at the last line.

    To run this code, copy and paste it into your editor, finish all the editing, get it onto your server, open a CMD window, and type powershell.exe -file your_powershell_file_name.ps1. It will start running, and you can Ctrl-C to break it. [The original post includes screenshots of the script running and of it detecting an attack and adding the firewall rule.]

    Regarding the design of the code:

    1, There are many ways you can detect an attack, but adding an IP to a block rule is no small thing; you need to think hard before doing it. You don't want to block yourself, and you don't want to block a customer/user, i.e. the good guys.
    2, Thus for each service/port I double-check. For 3389, the IP first needs to show up in netstat.exe, then in the event log; for FTP, first the event log, then the FTP log files.
    3, In three places I make sure I'm not adding myself to the block rule: -ne with the single IP, -like with the subnet.

    Now the final bit:

    1, The code will stop working after a while (depending on how heavily you are attacked: weeks, months, or days?!). It will throw a red error message in CMD. Don't panic - it does no harm, but it also no longer blocks new attacks. THE REASON (not confirmed with MS people) is that the COM object used to manage the firewall only accepts a list of IP addresses up to around 32KB long, I think; once the rule reaches that limit, you get the error message.
    2, This is in fact my second solution using the COM object; the first solution is still in the comment block for your reference. That one uses netsh, which fails because, being run from CMD, you can only pass it a list of IPs up to about 8KB.
    3, I haven't worked out the workaround yet. Some ideas: wrap the RemoteAddresses setting line with error checking, and once the limit is reached, use only the newly detected IPs as the list instead of appending to it. This basically resets your block rule to ground zero and loses the previous bad IPs. That does less harm than it sounds: after a certain period has passed, if any of those bad IPs still hasn't repented and continues to attack you, it only gets 30 seconds or 20 password guesses before you block it again. And there is the benefit that a bad IP may return to good hands, and you are not blocking a potential customer or your CEO's home PC just because once upon a time it was a zombie. Thus the ZEN of blocking: never block any IP for too long.
    4, But if you insist on blocking the ugly forever, my other ideas include: call MS support and ask how to set an arbitrary-length list of IP addresses in a rule (at least from my experience on the forum, they don't know and they don't care, because they think dynamic blocking should be done by some expensive hardware); or, from a programming perspective, create a new rule once the old one is full, so you end up with MY BLACKLIST1, MY BLACKLIST2, MY BLACKLIST3, ... etc. Once in a while you can compile them together and start a business selling your blacklist on the market!

    Enjoy the code!

    p.s. (PowerShell is REALLY REALLY GREAT!)

    Read the article

  • Deck from London UG 20110616 - Building a Reporting Brick capable of 1.2GBytes/sec and 80K IOs/sec for less than £2K

    - by tonyrogerson
    The Reporting Brick concept is not really anything new; it starts the walk toward bringing the work Jim Gray, Tom Barclay, et al. did on CyberBricks up to date in terms of current kit. A reporting brick is simply a box built from commodity kit utilising commodity SSDs, namely the OCZ IBIS drives, to gain extremely high levels of performance for a fraction of the cost required for typical server and SAN installs today. I'll write this up over the next few months as I work further on the concept; for now the deck attached summarises some of the ideas around it. The deck was presented at last night's London SQL Server User Group, and I will be presenting it again in Edinburgh on the 29th June and at other locations later in the year. Deck: Commodity Kit.pptx

    Read the article

  • Extended Blogs – Now cross posting to Sqlblog.com

    - by extended_events
    Thanks to some help from SQL MVP Adam Machanic, the Extended Events blog is now being cross-posted on SQLblog.com. Except for the first two posts (the welcome message & Reading event data 101), these blogs will be identical, so read whichever site you prefer: either the SQLblog version or the original MSDN blog. - Mike

    Read the article
