Search Results

Search found 176 results on 8 pages for 'wes curtis'.

Page 2/8 | < Previous Page | 1 2 3 4 5 6 7 8  | Next Page >

  • Refactoring Part 1 : Intuitive Investments

    - by Wes McClure
    Fear, it’s what turns maintaining applications into a nightmare. Technology moves on, teams move on, someone is left to operate the application, and what was green is now perceived as brown. Eventually the business will evolve and changes will need to be made. The approach to those changes often dictates the long-term viability of the application. Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoia about doing anything that doesn’t involve duct tape and baling twine. Don’t get me wrong, those have a place in the short-term viability of a project, but they don’t have a place in the long term. Add to it “us versus them” attitudes between the original team and those that maintain the application, internal politics and other factors, and you have a recipe for disaster. The result is code that quickly becomes unmanageable. Even the most clever of designs will eventually become suboptimal, and debt will mount until changes become exponentially more difficult.

    This is where refactoring comes in, and it’s something I’m very passionate about. Refactoring is about improving the process whereby we make change; it’s an exponential investment in the process of change. Without it we incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection. How can we tackle new development effectively by evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us. Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years. This is the first in a series of blog posts that tries to capture the process, which is largely driven by intuition built from smaller refactorings on other projects.

    Signs that refactoring could help:

    Testability. How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test. I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person! This is simply unacceptable for almost any situation. As it turns out, after about 6 hours of working with this part of the application I was able to cut the time down to under 30 seconds! In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can’t test fast then we can’t change fast, nor with confidence. Code is used by end users and it’s also used by developers, so consider your own needs in terms of the code base. Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements. What exactly is so wrong about test code in real code? Often, these become features for operators and sometimes end users. If you cannot run an integration test within a test runner in your IDE, it’s time to refactor.

    Readability. Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area? Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management, etc.)? Is the code declarative (what) or imperative (how)? What matters, not how. LINQ is a great abstraction of the what, not how, of collection manipulation. The Reactive framework is a great example of the what, not how, of managing streams of data. (A short sketch at the end of this post makes the distinction concrete.) Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence. It’s a large undertaking if the original designers didn’t pay much attention to readability, and as such it will never be done to “completion.” Make sure not to go overboard; instead apply this as you change an application, not in lieu of changes (as with testability).

    Complexity. Simplicity will never be achieved; it’s highly subjective. That said, a lot of code can be significantly simplified, so tidy it up as you go. Refactoring will often converge on a simplification step after enough time; keep an eye out for this.

    Understandability. In the process of changing code, one often gains a better understanding of it. Refactoring code is a good way to learn how it works. However, it’s usually best in combination with other reasons, in effect killing two birds with one stone. Often this is done when readability is poor, in which case understandability is usually poor as well. In the large undertaking we are making with this legacy application, we will be replacing it. Therefore, understanding all of its features is important, and this refactoring technique will come in very handy.

    Unused code. How can deleting things not help? This is a freebie in refactoring; it’s very easy to detect with modern tools, especially in statically typed languages. We have VCS for a reason: if in doubt, delete it out (ok, that was cheesy)! If you don’t know where to start when refactoring, this is an excellent starting point!

    Duplication. Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains. That said, mediocre developers live by copy/paste. Other times features converge and aren’t combined. Tools for finding similar code are great for the copy/paste problems. Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions, and will give intuition for where to look for conceptual repetition.

    80/20 and the Boy Scouts. It’s often said that 80% of the time, 20% of the application is used most. These tend to be the parts that are changed. There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above). I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up. If I spend 2 hours changing an application, in that 20%, I’ll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don’t. Ironically, after a short period of time the 20% shrinks enough that we don’t have to spend 80% of our time there and can move on to other areas.

    Refactoring is highly subjective; never attempt to refactor to completion! Learn to be comfortable with leaving one part of the application in a better state than others. It’s an evolution, not a revolution. These are some simple areas to look into when making changes and can help get one started in the process. I’ve often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours, but often it can lead to massive simplifications over the timespan of weeks and months of regular development.
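
    To make the declarative-versus-imperative point concrete, here is a minimal sketch. It uses Java streams rather than LINQ or Rx (which are .NET-specific), and the class and variable names are purely illustrative; the what-versus-how idea is the same:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.stream.Collectors;

        public class WhatVersusHow {
            public static void main(String[] args) {
                List<Integer> numbers = List.of(3, 8, 15, 22, 41, 60);

                // Imperative: spells out *how* to walk the list and build the result.
                List<Integer> evensImperative = new ArrayList<>();
                for (Integer n : numbers) {
                    if (n % 2 == 0) {
                        evensImperative.add(n);
                    }
                }

                // Declarative: states *what* we want; the iteration is the library's concern.
                List<Integer> evensDeclarative = numbers.stream()
                        .filter(n -> n % 2 == 0)
                        .collect(Collectors.toList());

                System.out.println(evensImperative);  // [8, 22, 60]
                System.out.println(evensDeclarative); // [8, 22, 60]
            }
        }

    The declarative form tends to survive refactoring better: the predicate is the only part that carries domain meaning, so it is usually the only part that needs to change.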

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot for posting this on Stack Overflow. They sent me here. At least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends. A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. Ok, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated do loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin). Each language has functions, subroutines, or whatever they're called, some built in, some user-coded. They were set off by function_name( parameters );. As in sqrt( x ) or rand( y ); Lately, there seems to be a new set of punctuation rules, especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr p(new gizmo); This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and 'for' seem to have grown parens: if(true), for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the arg-separating commas go? And finally, functions seem to have shed their parens: sqrt (2), select (...). I know, I know, loosening whitespace rules is good. Keep reading. Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, and miss the information that the placement of punctuation used to convey? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • Can not resolve hostnames on Xubuntu computers

    - by P Curtis
    I have a network of computers which has been running for many years. I have changed two of those to Xubuntu 11.10 and found I can no longer connect by ssh using the host-name from any other machine. I can connect and ping by IP although ping is very slow in one case (~200ms). All other machines are fine including another with Ubuntu 11.10. Host-name resolution works from Xubuntu machines to other networked machines. I am using wins resolution and have checked settings in /etc/nsswitch.conf are the same as my working Ubuntu systems. What is different in Xubuntu networking that I might have missed?

    Read the article

  • Get the following error when running Software Updater

    - by Curtis Cox
    W:Failed to fetch cdrom://Ubuntu 12.10 Quantal Quetzal - Beta i386 (20120926)/dists/quantal/main/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

    W:Failed to fetch cdrom://Ubuntu 12.10 Quantal Quetzal - Beta i386 (20120926)/dists/quantal/restricted/binary-i386/Packages  Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs

    E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • how do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135m video card).
    - For starters, the System Displays settings app always only showed one unknown monitor, even though I had the external Acer monitor connected.
    - So I downloaded and installed the nVidia drivers. According to what I read I would need to only use the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to only show a single monitor anyway.)
    - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station).
    - The nVidia settings app now recognized both monitors. So, I set up the X Server display config as Separate X screen for both monitors. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080.
    - Everything seemed like it would work, but the external monitor was just a complete white screen. The external monitor was non-functional, even though sometimes it would show the background image - still nothing would show up over there.
    - So, I checked the Enable Xinerama box.
    - Now, after logging out and back in, the wallpaper extends to both screens but I get no taskbar at the bottom or top, no system menus, and I have to press the power button to restart or log off.
    - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is Gnome Classic.
    - This is pretty much the same symptoms as found here: How do I fix 11.10 GUI?
    - So, I resign myself to the older shell.
    - Everything works fine until ... I unplug the external monitor ... this is a laptop after all.
    - Anyway, after doing some work on the road, I plug back in and I still see both screens and it's functional except, ...
    - Now, the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and are just reducing the max size of windows on that screen.
    - See screenshot here: http://i39.tinypic.com/35d2kh1.png
    So ...
    1. How do I get rid of those extra 3 black bars? They're taking valuable screen space.
    2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?

    Read the article

  • Recording Available: What's New in ETPM v2.3.0?

    - by Wes Curtis
    Our team has published recordings for 'What's New in ETPM v2.3.1?' as well as overviews of features in a number of functional areas. Partners and customers who are considering implementing on or upgrading to recent versions like 2.3.1 have asked for a similar overview of the features available in ETPM v2.3.0 so they have a more complete view of what has been recently released. The What's New in ETPM v2.3.0? recording presents an overview of the features delivered in the ETPM v2.3.0 release. This recording was conducted in an ETPM v2.3.1 environment but the content focuses solely on those features new to ETPM v2.3.0.    

    Read the article

  • Oracle Policy Automation YouTube Videos

    - by Wes Curtis
    The Oracle PSRM integration with Oracle Policy Automation provides a great option for implementing business rules as Microsoft Word and Excel documents. The following YouTube site includes a large number of videos on various OPA topics, including feature introductions, tutorials and overview presentations. Be sure to check these out if you would like to learn more about OPA and its capabilities. http://www.youtube.com/user/OraclePAVideos

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:
    - Set the library assembly version to auto-increment build and revision: AssemblyInfo -> [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
    - Major & Minor: Major should be changed when you have breaking changes; Minor should be changed once you have a solid new release. During development I don't increment these.
    - Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to code, then run the automated build (ruby/rake): run "rake nuget". The nuget task builds the nuget package and copies it to a local nuget feed. I use an environment variable to point at this feed so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file.

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    Set up the local nuget feed as a nuget package source (this is only required once per machine). Then go to the consuming project and update the package: Update-Package Library, or Install-Package.

    TLDR: change library code; run "rake nuget"; run "Update-Package Library" in the consuming application; build/test!

    If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your nuget feed's apikey, and take out the confirm(s) if you don't want them:

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed during testing once you are done and ready to publish.

    TLDR: consider the version number; run the command to copy to the public feed.

    Read the article

  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a saucy server with a lot of NICs and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist even though it's supposed to be created automatically. So I decided to write my own based on advice from Linux From Scratch:

        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"

    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:

        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1

    Read the article

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks. You have a master class which other classes register with. The master class then decides which of the registered classes to delegate the request to. An example, based on the passed-in class, might be something like this:

        public interface Processor {
            boolean canHandle(Object objectToHandle);
            void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isEven(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isOdd(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        // Can optionally implement the Processor interface itself
        public class ProcessorDelegator {
            private List<Processor> processors = new ArrayList<>();

            public void addProcessor(Processor processor) {
                processors.add(processor);
            }

            public void process(Object objectToProcess) {
                // Look up the relevant processor, either by keeping a list of what they can process
                // or by querying each one to see if it can process the object.
                Processor chosenProcessor = chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }
        }

    Note there are a few variations I see on this. In one variation the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, which is listed above in fake code, is where each is queried in turn. This is similar to chain of command, but I don't think it's the same, as chain of command means that the processor needs to pass to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric ProcessorDelegator which delegates to an even/odd processor and a string ProcessorDelegator which delegates to different strings. My question is: does this pattern have a name?
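
    For concreteness, a small usage sketch of the delegator above (hypothetical wiring; it assumes the placeholder helpers such as isNumeric, isEven, isOdd, doSomething and chooseProcessor are filled in):

        public class Demo {
            public static void main(String[] args) {
                ProcessorDelegator delegator = new ProcessorDelegator();
                delegator.addProcessor(new EvenNumberProcessor());
                delegator.addProcessor(new OddNumberProcessor());

                delegator.process(42); // routed to EvenNumberProcessor
                delegator.process(7);  // routed to OddNumberProcessor
            }
        }

    The caller only ever talks to the delegator; which concrete processor runs is decided entirely by the registration and canHandle checks.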

    Read the article

  • Why is git-svn useful?

    - by Wes
    I have read these related questions: I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? git for personal (one-man) projects. Overkill? ...and I understand why git is useful. What I don't understand is why tools like git-svn that allow git to integrate with svn are useful. When, for example, a team is working with svn, or any other centralised SCM, why would a member of the team opt to use git-svn? Are there any practical advantages for a developer that has to synchronize with a centralized repository?

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table: files - file_id, file_name users - user_id, user_name users_files_ref - user_file_ref_id, user_id, file_id I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario? Thanks for any help provided, it's much appreciated. -Wes
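
    To make the plan concrete, here is a minimal sketch of the insert-then-reference step. It is deliberately generic (Java/JDBC rather than the CodeIgniter stack in question, with table and column names taken from the schema above); the point is that the database driver can hand back the new file_id directly, so no "grab the last file added" query is needed, and wrapping both inserts in a transaction keeps the pair consistent under heavy traffic:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class FileUploadDao {
            // Insert a file row, then the users_files_ref row, in one transaction.
            public void addFileForUser(Connection conn, long userId, String fileName) throws SQLException {
                conn.setAutoCommit(false);
                try {
                    long fileId;
                    try (PreparedStatement insertFile = conn.prepareStatement(
                            "INSERT INTO files (file_name) VALUES (?)",
                            Statement.RETURN_GENERATED_KEYS)) {
                        insertFile.setString(1, fileName);
                        insertFile.executeUpdate();
                        try (ResultSet keys = insertFile.getGeneratedKeys()) {
                            keys.next();
                            fileId = keys.getLong(1); // the new file_id, straight from the driver
                        }
                    }
                    try (PreparedStatement insertRef = conn.prepareStatement(
                            "INSERT INTO users_files_ref (user_id, file_id) VALUES (?, ?)")) {
                        insertRef.setLong(1, userId);
                        insertRef.setLong(2, fileId);
                        insertRef.executeUpdate();
                    }
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }

    Most database layers expose the same facility under a different name; MySQL's LAST_INSERT_ID(), for example, is scoped to the current connection, so it stays correct even with many simultaneous uploads.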

    Read the article

  • How could I cache images that I'm pulling from a magento database through ajax?

    - by wes
    Here's the script being called through ajax:

        <?php
        require_once '../app/Mage.php';
        umask(0);
        /* not Mage::run(); */
        Mage::app('default');

        $cat_id = ($_POST['cat_id']) ? $_POST['cat_id'] : NULL;

        try {
            $category = new Mage_Catalog_Model_Category();
            $category->load($cat_id);
            $collection = $category->getProductCollection();

            $output = '<ul>';
            foreach ($collection as $product) {
                $cProduct = Mage::getModel('catalog/product');
                $cProduct->load($product->getId());
                $output .= '<li><img id="' . $product->getId() . '" src="'
                    . (string)Mage::helper('catalog/image')->init($cProduct, 'small_image')->resize(75)
                    . '" class="thumb" /></li>';
            }
            $output .= '</ul>';
            echo $output;
        } catch (Exception $e) {
            echo 'Caught exception: ', $e->getMessage(), "\n";
        }

    I'm just passing in the Category ID, which I've tacked onto the navigation links, then doing some work to eventually just pass back all product images in that category. I'm using this on a drag-and-drop build-a-bracelet type of application, and the amount of images returned is sometimes in the 500s. So it gets pretty held up during transmission, sometimes 10 seconds or so. I know I'd do good by caching them some way, just not sure how to go about it. Any help is much appreciated. Thanks. -Wes

    Read the article

  • iptables not allowing mysql connections to aliased ips?

    - by Curtis
    I have a fairly simple iptables firewall on a server that provides MySQL services, but iptables seems to be giving me very inconsistent results. The default policy in the script is as follows:

        iptables -P INPUT DROP

    I can then make MySQL public with the following rule:

        iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

    With this rule in place, I can connect to MySQL from any source IP to any destination IP on the server without a problem. However, when I try to restrict access to just three IPs by replacing the above line with the following, I run into trouble (xxx = masked octet):

        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.184 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.196 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.251 -j ACCEPT

    Once the above rules are in place, the following happens:
    - I can connect to the MySQL server from the .184, .196 and .251 hosts just fine, as long as I am connecting to the MySQL server using its default IP address or an IP alias in the same subnet as the default IP address.
    - I am unable to connect to MySQL using IP aliases that are assigned to the server from a different subnet than the server's default IP when I'm coming from the .184 or .196 hosts, but .251 works just fine.
    - From the .184 or .196 hosts, a telnet attempt just hangs...

        # telnet 209.xxx.xxx.22 3306
        Trying 209.xxx.xxx.22...

    If I remove the .251 line (making .196 the last rule added), the .196 host still cannot connect to MySQL using IP aliases (so it's not the order of the rules that is causing the inconsistent behavior). I know this particular test was silly, as it shouldn't matter what order these three rules are added in, but I figured someone might ask. If I switch back to the "public" rule, all hosts can connect to the MySQL server using either the default or aliased IPs (in either subnet):

        iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

    The server is running in a CentOS 5.4 OpenVZ/Proxmox container (2.6.32-4-pve). And, just in case you prefer to see the problem rules in the context of the iptables script, here it is (xxx = masked octet):

        # Flush old rules, old custom tables
        /sbin/iptables --flush
        /sbin/iptables --delete-chain

        # Set default policies for all three default chains
        /sbin/iptables -P INPUT DROP
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -P OUTPUT ACCEPT

        # Enable free use of loopback interfaces
        /sbin/iptables -A INPUT -i lo -j ACCEPT
        /sbin/iptables -A OUTPUT -o lo -j ACCEPT

        # All TCP sessions should begin with SYN
        /sbin/iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Accept inbound TCP packets (Do this *before* adding the 'blocked' chain)
        /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow the server's own IP to connect to itself
        /sbin/iptables -A INPUT -i eth0 -s 208.xxx.xxx.178 -j ACCEPT

        # Add the 'blocked' chain *after* we've accepted established/related connections
        # so we remain efficient and only evaluate new/inbound connections
        /sbin/iptables -N BLOCKED
        /sbin/iptables -A INPUT -j BLOCKED

        # Accept inbound ICMP messages
        /sbin/iptables -A INPUT -p ICMP --icmp-type 8 -j ACCEPT
        /sbin/iptables -A INPUT -p ICMP --icmp-type 11 -j ACCEPT

        # ssh (private)
        /sbin/iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

        # ftp (private)
        /sbin/iptables -A INPUT -p tcp --dport 21 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT

        # www (public)
        /sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # smtp (public)
        /sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 2525 -j ACCEPT

        # pop (public)
        /sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT

        # mysql (private)
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.184 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.196 -j ACCEPT
        /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.251 -j ACCEPT

    Any ideas? Thanks in advance. :-)

    Read the article

  • Anyone know how to get dual screens working on a Dell E6410 laptop with Ubuntu 10.04 64 bit?

    - by Curtis
    I've installed the drivers from nVidia. When I go into the NVIDIA X Server Settings application, in the X Server Display Configuration section, and click the "Configure" button, "TwinView" is disabled. Also, clicking "Detect Displays" doesn't pick up my monitor (which is connected through a port replicator - keyboard and mouse in that port replicator work fine). Has anyone else seen this? Is this just a limitation of the current nvidia linux drivers?

    Read the article

  • Why doesn't the People Pane in Outlook 2010 show appointments for individual contacts?

    - by Curtis
    Outlook 2010 will not show appointments in the people pane. Under the Activities tab, if I go to All Items then eventually the appointments will show, but this takes a seriously long time. If I then click off the tab to look at another field and return to All Items, all the appointments are gone again. I need to be able to:
    - Open a contact and see when that contact has appointments
    - Open an appointment and see which contacts are attached to that appointment
    It works well from the appointment card to the contact, but has me completely frustrated going from the contact card to find the appointments. I have tried many things but cannot solve this problem. My setup is as follows:
    - Exchange Server
    - Windows 7 Ultimate
    - Indexing enabled
    - Cached Exchange mode enabled
    Help! This is the whole reason I installed Outlook 2010.

    Read the article

  • Server 2008 R2 How to Change Windows 7 Basic Theme Color

    - by Wes Sayeed
    We're deploying thin clients connecting to a terminal server farm. The computers have high visibility to the public and I would like them to at least look presentable and not like something out of 1995. So I installed the Desktop Experience feature and enabled the Theme service. The server will not support Aero because it has no 3D graphics, but we can enable the Windows 7 Basic theme, which has the Aero look without the 3D effects. The problem with that theme is that you can select any window color you want, as long as it's baby boy blue. Is there a way to make those windows another color? The window color controls do nothing.

    Read the article

  • How much should a Systems Administrator be making?

    - by Curtis
    Hello, I'm a Sys Admin for a small (but successful and growing) company (~60 employees). I've got roughly 5-6 years of actual sys admin experience, plus another 5+ years of lower level work in the industry. I'm responsible for most everything above a helpdesk level in the company (server[windows]/network[cisco]/firewall/SAN[emc] setup/configuration/maintenance/troubleshooting), lead many projects, analyze system data -- I'm sure you've heard it all before... I have a bunch of certs, most are just "nice to have", but the ones that actually apply to my role are CCNA, MCSE, VCP (VMware). If things go wrong, I'm first in line to resolve the issue. I'm not management (no one reports to me). I've seen many of these sorts of questions online before, and I know the typical response is "too many variables, depends on location, industry type" etc etc. I'm just wondering (ballpark) what I should be looking for. I've tried to give as much detail as I can, but if I'm missing something, I'd be glad to post it. Thanks anyone.

    Read the article

  • How can I set the time that Windows 7 changes the background in the background slideshow?

    - by wes
    I've set up my Windows 7 background slideshow with a few choice wallpapers and set it to cycle daily, nothing excessive. Glad to see this feature built into Windows now. My problem is that the change happens at 3 in the afternoon, the time when I originally set up the background. I'd like it to switch at night, so I can come in to work each day to a fresh look. Is there a registry entry I can edit to manually set that switch time? Waiting until midnight to set it doesn't count :P

    Read the article

  • My browsers won't use my full screen resolution, IE different

    - by curtis
    My screen resolution is actually 3200x1800, but when I'm in a browser it acts like I have a smaller resolution. How do I get my browsers to use my full resolution? On Chrome it's using 1280x720, and on IE it's using 1600x900, according to whatismyscreenresolution.com, which shows different values for different browsers. I took a screenshot of them and verified that my resolution is 3200x1800, as that is the pixel size of the bitmap. I'm on a laptop with no monitor plugged in. My zoom in both browsers is at 100%. I've tried zooming out below 100%, but then the text is unreadable and pixellated. I've tried restarting. Windows 8.1. I've tried the Chrome extension OptiZoom and it does nothing. document.body.clientWidth gives 1247, and I want it to give 3200.

    Read the article

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines, a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server which, using Apache's proxy module, relays the requests & responses between them and the web server on the Windows server. The idea (designed and deployed about 9 years ago) was that it was more secure to have the BSD server be what outside people connected to, rather than the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services & applications removed. These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server necessary for security in this setup? From my research online I don't see anyone else running a setup like this (I don't see anyone questioning it, at least). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet ** to a Windows 2008 R2 server that's running the web application? ** there's a good hardware firewall between the internet and the server, with only minimal ports open. Thank you.

    Read the article

  • Exchange 2013 Virtual Machine: Backup just mailboxes and clear logs

    - by Ben Curtis
    I have a Windows Server 2012 machine running Exchange 2013 running as a KVM virtual machine. For my VM guests, I do full image based backups from the host, so that I can quickly restore to any host server simply by copying over the disk image files. This means I don't need a nightly full system backup. That being said, without running a VSS Full Backup, the Exchange logs get massive (Specifically, the performance logs which are 500MB a day). In addition, I would also like to have a nightly backup of just the mail database. What is the best way to accomplish this? A full backup of the C:\Program Files\Microsoft\Exchange Server\V15 folder as I found in one tutorial did not clear out the logs. Thanks, Ben

    Read the article

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all, in various combinations, and have been consistently unsuccessful.

    The goal
    All I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always.

    What I've already done
    I am running Apache, PHP, and sendmail on CentOS 5.5, [email protected]. We have an SMTP server at 192.168.0.4. The domain's email accounts are all at @domain.org. I have successfully set up "smart host" using this line in the sendmail.mc file:

        define(`SMART_HOST', `192.168.0.4')dnl

    Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file:

        FEATURE(`masquerade_entire_domain')dnl
        FEATURE(`masquerade_envelope')dnl
        FEATURE(`allmasquerade')dnl
        MASQUERADE_AS(`domain.org')dnl
        MASQUERADE_DOMAIN(`domain.org.')dnl
        MASQUERADE_DOMAIN(`localhost.localdomain.')dnl

    This rewrites "to" addresses, but not "from" addresses. Testing from the command line:

        sendmail -v [email protected]

    The mail always shows as from the local user (in this case root, or my local user account). I had read that the "sendmail" command sometimes bypasses masquerading. Nevertheless, using the "mail" command has the same result. After that, I have explored several "solutions", including: mailertable, virtusertable, FEATURE(`accept_unresolvable_domains')dnl, LOCAL_DOMAIN(`localhost.localdomain')dnl, FEATURE(`genericstable')dnl, the /etc/mail/access file, the /etc/mail/local-host-names file, and the /etc/mail/trusted-users file. All to no effect.

    The last thing I've tried
    So, I decided to go in a different direction, and try to set the envelope "from" address via PHP, using either the configuration in /etc/php.ini, or adding the -f parameter to the mail() function or to the sendmail command. If I run this command:

        sendmail -v -f [email protected] [email protected]

    I get this error in /var/log/maillog:

        Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown
        Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown
        Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2)

    Which is basically a "User unknown" 550 error.

    Help
    Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed in it, over days of trying things. Why is changing the envelope "from" address via the command line generating a "User unknown" error?

    Read the article

  • Converting a .bat executable to Mac

    - by Wes
    I need some help converting a .bat executable file that I run on our PC at my job so that it works on a Mac. Before we upload tar files to our website we run this script which, to the best of my knowledge, simply unlocks all of the permissions to the tar and all the images within. If someone could help me in "translating" it to run on my Mac that would be awesome! I was hoping I could set up something in Automator. Here's the code:

        del images5.tar
        move images4.tar images5.tar
        move images3.tar images4.tar
        move images2.tar images3.tar
        move images.tar images2.tar
        cd ..
        tar --mode=777 -rvf images.tar *.jpg
        tar --mode=777 -rvf images.tar p
        move images.tar ./tarpics

    Read the article

  • What is the netmask equivalent in the version of route for the Mac?

    - by Wes Reing
    In order to create some special routes for debugging I used the following command on my Linux server:

        sudo route add -net 10.78.0.0 netmask 255.255.0.0 gw 10.101.1.1

    which works, and sets up the routes I need. But when I run the same command on my Mac I get:

        route: bad address: netmask

    I'm guessing that the version of route that is included in OS X requires a different format, but I'm at a loss to figure it out.

    Read the article
