Search Results

Search found 50510 results on 2021 pages for 'static files'.


  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here. But following his guidelines didn't work for me. My eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assumed it didn't work because he's using 10.4, and this article says there's a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.] I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.]
    I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content:
        alias bond0 bonding
        options bonding mode=6 miimon=100 downdelay=200 updelay=200
    And I included "bonding" in /etc/modules. So, after several approaches, here is my latest interfaces file:
        auto lo
        iface lo inet loopback

        auto eth5
        iface eth5 inet manual

        auto br5
        iface br5 inet static
            post-up /sbin/ip rule add from [network].79 lookup 10
            post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5
            address [network].79
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth5
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        auto eth2
        iface eth2 inet manual

        auto br2
        iface br2 inet static
            post-up /sbin/ip rule add from [network].78 lookup 11
            post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2
            address [network].78
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth2
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        iface eth0 inet manual
        iface eth1 inet manual

        auto bond0
        iface bond0 inet static
            bond_miimon 100
            bond_mode balance-alb
            up /sbin/ifenslave bond0 eth0 eth1
            down /sbin/ifenslave -d bond0 eth0 eth1

        auto br0
        iface br0 inet static
            address [network].60
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports bond0
    eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages:
        kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.
    even though there is a bond-miimon line in /etc/network/interfaces (if that's what they're talking about). Also, the bond seems to go in and out of promiscuous mode several times on boot:
        Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode
    Any advice would be greatly appreciated.
It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.

    Read the article

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up.  When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part…  Little did I know that my fun was just beginning…
    The Problem
    I'll be honest, I had never really secured a site before with SSL.  This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure.  I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CAs, chains, etc…  What I didn't realize was the importance of the CSR in the whole process.  Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request.  When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store.
    Since the online interface had been used to generate the CSR when the certificate was ordered for our site, the certificate came back to us in 5 separate files:
        A root certificate – (*.crt file)
        An intermediate certificate – (*.crt file)
        Another intermediate certificate – (*.crt file)
        The SSL certificate for our site – (*.crt file)
        The private key for our certificate – (*.key file)
    Well, in case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password.  Also, in the case of our SSL certificate, you need to include the private key with the file.  As you can see, we didn't have a PFX file to upload. If you don't get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… Probably between the fifth and seventh somewhere…
    The Solution
    The solution is to take the files that make up the certificate's chain and key, and combine them into a file that can be imported into your local computer's store, as well as uploaded to Windows Azure.  I cannot take the credit for this information, as I simply researched a while before finding out how to do this.
        1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
        2. Install the OpenSSL for Windows toolkit
        3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
        4. Open a command prompt
        5. Navigate to the folder where you installed OpenSSL
        6. Run the following command:
            openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}
    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
    You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!
    Conclusion
    When I first looked around for a solution to this problem, there were not many places online that had the information that I was looking for.  While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!
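    For what it's worth, once the PFX is generated you can sanity-check it from .NET before uploading it to Azure; in particular, that the private key actually made it into the package. This is only a minimal sketch, and the file name and password below are placeholders rather than values from the post:
        using System;
        using System.Security.Cryptography.X509Certificates;

        class PfxCheck
        {
            static void Main()
            {
                // Placeholder path and password - use whatever you passed to "openssl pkcs12 -export".
                var cert = new X509Certificate2("outcert.pfx", "yourPfxPassword",
                    X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet);

                Console.WriteLine("Subject:         " + cert.Subject);
                Console.WriteLine("Has private key: " + cert.HasPrivateKey); // should print True for Azure
            }
        }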

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that the thought of a single long list of files in Solution Explorer without any folders makes me shudder so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover new files get created with a filename of the object name + “.sql” and often people like to have an extra identifier in the filename to indicate the object type: The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and while it pains me to say it (as I am no great fan of TFS’s version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: If you are using SSDT consider using an Visual-Studio-integrated VCS that can easily handle file renames and file moves I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might rise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling. Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models. XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual. Let's take a visual look at how this works. Here is an example of a model devloped in BizAgi: This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below. Model Converted to Tutor Procedure Below is the task section of the procedure after conversion from an XPDL file. Model converted to BPA Model converted to BPM End users still want step by step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. 
But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information. References Wikipedia XPDL. Workflow Management Coalition, XPDL Support and Resources Oracle Business Process Converter manual, Oracle Tutor 14 Oracle Business Process Management 11g If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user. Never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer literate person :)My Macbook Air has arrived! And it's a thing of beauty:First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor. Upgraded to 8GB of RAM for an additional $100, SSD flash storage  = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. Done a lot of reading  and between VMWare Fusion, Parallels and Bootcamp...I'm going to go with VMWare Fusion for $49.99And now my impressions (please re-read disclaimer before proceeding!):I open the box and am trying to understand exactly how the magsafe connector works (and how to disconnect it).  Why does it have two socket outlet plugs? Who knows.  I feel like Hansel in Zoolander. The files are "in" the computer.Stuck in my external hard drive (usb). So how do I get to the files? To the Googles!Argh...it can't read my external NTFS drive. Fat32 can't support field over 4GB…problematic since some of my existing VMWare image files are much larger than 4GB. Didn't see this coming.Three year old loves iPhoto. Super easy to use. Don't even know what I'm doing but I've already (accidentally) discovered the image filtering options. Fun stuff.First thing I downloaded ever => Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).Ok, I get it… Finder == windows explorer. But where is my hierarchical structure? I miss the tree :(On that note, yeah…how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares more though…this notion of a smart folder. Feel like the godfather - just get the job done, I don't care how you handle it, I don't want to know...just get it done. What the hell is AirDrop?Mail…just worked. Still in shock that they have a free client for yahoo mail (please no yahoo jokes).mail -> deleting a message takes 5 seconds. Have they heard of async?"Command" key instead of "Control" ok, then what the $%&^! is the control key for then"aliases" == shortcuts I thinkI don't see the file system. And I'm scared. All these things I'm downloading…these .dmg files (bad name) where are they going? Can't seem to delete when they're doneUgh...realized need to buy a mini-to-vga adaptor if I want to use my external monitor ($13 on ebay, $39 in apple store).Windows docking is trickiest for me…this notion of detached windows with a menu bar at the top. I don't like this paradigm, it's confusing. But maybe because I've been using Windows for too long.Evernote, Dropbox desktop clients seem almost identical…few quirks here and there I need to get used to.iTunes is still a bit gross. In a weird way it's actually worse on a Mac if thats possible. This is not the MacBook's fault…this is a software design issue. Overall: UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past…or this is the past and I've been spoiled by the future (which would be Windows…don't be hating I happen to be very productive in Win7)  So there you go - my 90 minute first impression of the MacBook universe.

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of less than about 5 people everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way. One person messes it up for the rest of us. Rules are then put in place to try and prevent it from happening again.Breaking the rules is in our nature. In this example it is for good cause and a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged to is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule!  When customers complain or the system is down, the rules go out the window. Commonly lead developers get production access because they are ultimately responsible for supporting the application and may be the only person who knows how to fix it. The problem with only giving lead developers production access is it doesn't scale from a support standpoint. Those key employees become the go to people to help solve application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This actually the last thing you want your lead developers doing. They should be working on something more strategic like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day. Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems and the back log of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, the system admin being forced to help, and most importantly your customers who are not happy about the situation. This process is terribly inefficient. Production database access is also important for solving application problems, but presents a lot of risk if developers are given access. They could see data they shouldn't.  They could write queries on accident to update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the application we create are data driven, it can be very difficult to track down application bugs without access to the production databases.Besides it being against the rule, why don't all developers have access? 
Most of the time it comes down to security, change of control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem and in the process forget what they changed and made the problem worse. So it is a double edge sword. Don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages being created!Matt WatsonFounder, CEOStackifyAgile Support for Agile Developers

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: Assist with disaster recovery preparation Identify configuration changes I’ve released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.  Disaster Planning ScriptSqlConfig generates scripts for logins, jobs and linked servers.  It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don’t include the password if you use a SQL Server account to connect to the linked server. You’ll need to store those somewhere secure. All the logins are scripted to a single file. This file includes windows logins, SQL Server logins and any server role membership.  The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster. Or you could DIFF them with the configuration on the new server. Configuration Changes These scripts and files are all designed to be checked into a version control system.  The scripts themselves don’t include any date specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory.  The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up.  To see any changes I just need to query the version control system to show many any changes to the files. Database Scripting Utilities that script database objects are plentiful.  CodePlex has at least a dozen of them including one I wrote years ago. The code is so easy to write it’s hard not to include that functionality. This functionality wasn’t high on my list because it’s included in a database backup.  Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven’t gotten around it yet. If there’s something you need, please log an issue and get it added. Since it scripts one object per file these really aren’t appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control but a little extra insurance never hurts. Conclusion I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren’t contained in user databases. This should help with configuration changes and especially disaster recovery.
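    For anyone wondering what the SMO side of that looks like, here is a rough sketch (not the utility's actual code) of scripting each SQL Agent job to its own file, the way the post describes; the server name and output folder are placeholders, and you need references to the SMO assemblies (Microsoft.SqlServer.Smo, Microsoft.SqlServer.ConnectionInfo, and friends):
        using System;
        using System.IO;
        using System.Linq;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Smo.Agent;

        class ScriptJobs
        {
            static void Main()
            {
                var server = new Server("MYSERVER");   // placeholder instance name
                var outputDir = @"C:\SqlConfig\Jobs";  // placeholder output folder
                Directory.CreateDirectory(outputDir);

                foreach (Job job in server.JobServer.Jobs)
                {
                    // Job.Script() returns the T-SQL needed to recreate the job.
                    var lines = job.Script().Cast<string>().ToArray();
                    File.WriteAllLines(Path.Combine(outputDir, job.Name + ".sql"), lines);
                }
            }
        }
    Logins and linked servers are reachable the same way through server.Logins and server.LinkedServers.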

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?

    Read the article

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from a experienced hacker. Everyday we see new keygens, cracks, serials being released that contain ways around copy protection from small companies. This is a simple process that will make a lot of hackers quit because so many others use nothing. If you were a thief would you pick the house that has security signs and an alarm or one that has nothing? To so begin: Obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Lets begin by looking at the cartoon below:     You are probably familiar with the term and probably ignored this like most programmers ignore user security. Today, I’m going to show you reflection and a way to obfuscate it. Please understand that I am aware of ways around this, but I believe some security is better than no security.  In this sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window. Sample Program. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad")); //Returns a True or False depending if you have notepad running.             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));         }     } }   Pretend, that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine if it was produced in .NET by examing the file in ILDASM or Redgate’s Reflector. We are going to examine the file using RedGate’s Reflector. Upon launch, we simply drag/drop the exe over to the application. We have the following for the Main method:   and for the IsProcessOpen method:     Without any other knowledge as to how this works, the hacker could export the exe and get vs project build or copy this code in and our application would run. Using Reflector output. using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad"));             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)             {                 return clsProcess.ProcessName.Contains(name);             });         }       } } The code is not identical, but returns the same value. At this point, with a little bit of effort you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfoscutor. So download and load Eazfuscator.NET and drag/drop your exectuable/project into the window. It will work for a few minutes depending if you have a quad-core or not. After it finishes, open the executable in RedGate Reflector and you will get the following: Main After Obfuscation IsProcessOpen Method after obfuscation: As you can see with the jumbled characters, it is not as easy as the first example. 
I am aware of methods around this, but it takes more effort and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee. This would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.

    Read the article

  • Problem in UDP socket programming in C

    - by Md. Talha
    I compile the following C code for a UDP client and then run './udpclient localhost 9191' in a terminal. At "Enter text:" I type hello, but it shows an error in sendto() as below:
        Enter text: hello
        hello : error in sendto()guest-1SDRJ2@md-K42F:~/Desktop$
    Note: I first open the server port in another terminal with './server 9191'. I believe there is no error in the server code. The UDP client is not passing the message to the server. If I don't use threads, the message is passed, but I have to do it with threads.
    UDP client code:
        /* simple UDP echo client */
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <netdb.h>
        #include <stdio.h>
        #include <pthread.h>

        #define STRLEN 1024

        static void *readdata(void *);
        static void *writedata(void *);

        int sockfd, n, slen;
        struct sockaddr_in servaddr;
        char sendline[STRLEN], recvline[STRLEN];

        int main(int argc, char *argv[])
        {
            pthread_t readid, writeid;
            struct sockaddr_in servaddr;
            struct hostent *h;

            if (argc != 3) {
                printf("Usage: %s <proxy server ip> <port>\n", argv[0]);
                exit(0);
            }

            /* create hostent structure from user entered host name */
            if ( (h = gethostbyname(argv[1])) == NULL) {
                printf("\n%s: error in gethostbyname()", argv[0]);
                exit(0);
            }

            /* create server address structure */
            bzero(&servaddr, sizeof(servaddr)); /* initialize it */
            servaddr.sin_family = AF_INET;
            memcpy((char *) &servaddr.sin_addr.s_addr, h->h_addr_list[0], h->h_length);
            servaddr.sin_port = htons(atoi(argv[2])); /* get the port number from argv[2] */

            /* create a UDP socket: SOCK_DGRAM */
            if ( (sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
                printf("\n%s: error in socket()", argv[0]);
                exit(0);
            }

            pthread_create(&readid, NULL, &readdata, NULL);
            pthread_create(&writeid, NULL, &writedata, NULL);

            while (1) {
            };

            close(sockfd);
        }

        static void * writedata(void *arg)
        {
            /* get user input */
            printf("\nEnter text: ");
            do {
                if (fgets(sendline, STRLEN, stdin) == NULL) {
                    printf("\n%s: error in fgets()");
                    exit(0);
                }

                /* send a text */
                if (sendto(sockfd, sendline, sizeof(sendline), 0, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0) {
                    printf("\n%s: error in sendto()");
                    exit(0);
                }
            } while (1);
        }

        static void * readdata(void *arg)
        {
            /* wait for echo */
            slen = sizeof(servaddr);
            if ( (n = recvfrom(sockfd, recvline, STRLEN, 0, (struct sockaddr *) &servaddr, &slen)) < 0) {
                printf("\n%s: error in recvfrom()");
                exit(0);
            }

            /* null terminate the string */
            recvline[n] = 0;
            fputs(recvline, stdout);
        }

    Read the article

  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a github repository that I didn't want there. I found this script that removes files completely from git history, like so:
        #!/bin/bash
        set -o errexit

        # Author: David Underhill
        # Script to permanently delete files/folders from your git repository. To use
        # it, cd to your repository's root and then run the script with a list of paths
        # you want to delete, e.g., git-delete-history path1 path2

        if [ $# -eq 0 ]; then
            exit 0
        fi

        # make sure we're at the root of git repo
        if [ ! -d .git ]; then
            echo "Error: must run this script from the root of a git repository"
            exit 1
        fi

        # remove all paths passed as arguments from the history of the repo
        files=$@
        git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD

        # remove the temporary history git-filter-branch otherwise leaves behind for a long time
        rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune
    I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages:
        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.
    Everything seems to have pushed fine though, because the files seem to be gone from the GitHub repository. If I try and push again I get the same thing:
        error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
        error: Cannot update the ref 'refs/remotes/origin/master'.
        Everything up-to-date
    EDIT
        $ sudo chgrp {user} .git/logs/refs/remotes/origin/master
        $ sudo chown {user} .git/logs/refs/remotes/origin/master
        $ git push
        Everything up-to-date
    Thanks!
    EDIT
    Uh Oh. Problem. I've been working on this project all night and just went to commit my changes:
        error: Unable to append to .git/logs/refs/heads/master: Permission denied
        fatal: cannot update HEAD ref
    So I:
        sudo chown {user} .git/logs/refs/heads/master
        sudo chgrp {user} .git/logs/refs/heads/master
    I try the commit again and I get:
        error: Unable to append to .git/logs/HEAD: Permission denied
        fatal: cannot update HEAD ref
    So I:
        sudo chown {user} .git/logs/HEAD
        sudo chgrp {user} .git/logs/HEAD
    And then I try the commit again:
        16 files changed, 499 insertions(+), 284 deletions(-)
        create mode 100644 logs/DBerrors.xsl
        delete mode 100644 logs/emptyPHPerrors.php
        create mode 100644 logs/trimXMLerrors.php
        rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%)
        create mode 100644 public/codeSite/php/init.php
        $ git push
        Counting objects: 49, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (27/27), done.
        Writing objects: 100% (27/27), 7.72 KiB, done.
        Total 27 (delta 15), reused 0 (delta 0)
        To [email protected]:IAmCorbin/MooKit.git
           59da24e..68b6397  master -> master
    Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again:
        Everything up-to-date
    Umm... it doesn't look like it. I've never had this issue before. Could this be a problem with github? Or did I mess something up with my git project?
    EDIT
    Nevermind, I did a simple:
        git push origin master
    and it pushed fine.

    Read the article

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns part of the TFS 2010 Power Tools and I'm just not understanding something - I simply cannot get anything to change as I try to use this! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex based, so let's clear that doubt... I have tried to block the following scenarios: 1) I have modified all of my T4 EF templates to generate files named EntityName.gen.cs. I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z and nadda! 2) I don't want app.config and web.config files to be checked-in by default because we have these things stored into app.config.base and web.config.base files that our build scripts use to generate our per-environment app.config and web.config files. As such, I tried the following regexes and again, nothing worked! web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z. What is it that I am screwing up with this?
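    One thing that can help when a pattern silently does nothing is to test the expressions outside the policy first, against the kind of item strings you expect it to block. The sketch below is only an experiment harness: the example paths are made up, and whether the policy matches against full server paths or bare file names (and therefore whether $ or \z anchoring is appropriate) is exactly the assumption worth verifying.
        using System;
        using System.Text.RegularExpressions;

        class PatternCheck
        {
            static void Main()
            {
                // Hypothetical items as they might appear in a pending check-in.
                string[] paths =
                {
                    "$/MyProject/Entities/Customer.gen.cs",
                    "$/MyProject/Web/web.config",
                    "$/MyProject/Web/web.config.base"
                };

                // Same expressions the post tries; tweak the anchors while experimenting.
                var forbidden = new Regex(@"(\.gen\.cs|web\.config|app\.config)$", RegexOptions.IgnoreCase);

                foreach (var path in paths)
                    Console.WriteLine(path + " -> " + (forbidden.IsMatch(path) ? "blocked" : "allowed"));
            }
        }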

    Read the article

  • MVVM Prism Nested Regions Can't Find Child Regions

    - by Garry Clark
    I have a Menu (Telerik RadMenu) that has nested regions defined in the Shell. In my modules I register the module's menu or toolbar items with these regions. Everything works fine for the root regions, but when I try to add something to a child region, such as the File region on the menu, I get the error "The exception message was: The region manager does not contain the FileMenuRegion region." However, like I said, if I change this code
        regionManager.Regions[RegionNames.FileMenuRegion].Add(menuItem);
    to this
        regionManager.Regions[RegionNames.MainMenuRegion].Add(menuItem);
    everything works fine. Below is the XAML for my menu so you can see the region names and how they are constructed. Any help would be greatly appreciated as this is bewildering and driving me crazy.
    Menu
        <telerikNavigation:RadMenu x:Name="menuMain"
                                   DockPanel.Dock="Top"
                                   prismrgn:RegionManager.RegionName="{x:Static i:RegionNames.MainMenuRegion}"
                                   telerik:StyleManager.Theme="{Binding Source={StaticResource settings}, Path=Default.CurrentTheme}">
            <telerikNavigation:RadMenuItem Header="{x:Static p:Resources.File}"
                                           prismrgn:RegionManager.RegionName="{x:Static i:RegionNames.FileMenuRegion}">
                <telerikNavigation:RadMenuItem Header="{x:Static p:Resources.Exit}" Command="{Binding ExitCommand}">
                    <telerikNavigation:RadMenuItem.Icon>
                        <Image Source="../Resources/Close.png" Stretch="None" />
                    </telerikNavigation:RadMenuItem.Icon>
                </telerikNavigation:RadMenuItem>
            </telerikNavigation:RadMenuItem>
        </telerikNavigation:RadMenu>
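    For reference, the view-injection call in the question (regionManager.Regions[...].Add(...)) requires the target region to already be known to that region manager when the line runs, whereas Prism's view-discovery registration defers the work until the region is actually created. A hedged sketch of the discovery style is below; the namespaces are Prism 4's (adjust for the CAL/Prism 2 equivalents) and the view type name is made up:
        using Microsoft.Practices.Prism.Modularity;
        using Microsoft.Practices.Prism.Regions;

        public class MenuModule : IModule
        {
            private readonly IRegionManager regionManager;

            public MenuModule(IRegionManager regionManager)
            {
                this.regionManager = regionManager;
            }

            public void Initialize()
            {
                // View discovery: Prism adds the view when FileMenuRegion is created,
                // instead of requiring it to exist at this point. ExitMenuItemView is hypothetical.
                regionManager.RegisterViewWithRegion(RegionNames.FileMenuRegion,
                                                     typeof(ExitMenuItemView));
            }
        }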

    Read the article

  • Noob Objective-C/C++ - Linker Problem/Method Signature Problem

    - by Josh
    There is a static class Pipe, defined in C++ header that I'm including. The static method I'm interested in calling (from Objetive-c) is here: static ERC SendUserGet(const UserId &_idUser,const GUID &_idStyle,const ZoneId &_idZone,const char *_pszMsg); I have access to an objetive-c data structure that appears to store a copy of userID, and zoneID -- it looks like: @interface DataBlock : NSObject { GUID userID; GUID zoneID; } Looked up the GUID def, and its a struct with a bunch of overloaded operators for equality. UserId and ZoneId from the first function signature are #typedef GUID Now when I try to call the method, no matter how I cast it (const UserId), (UserId), etc, I get the following linker error: Ld build/Debug/Seeker.app/Contents/MacOS/Seeker normal i386 cd /Users/josh/Development/project/Mac/Seeker setenv MACOSX_DEPLOYMENT_TARGET 10.5 /Developer/usr/bin/g++-4.2 -arch i386 -isysroot /Developer/SDKs/MacOSX10.5.sdk -L/Users/josh/Development/TS/Mac/Seeker/build/Debug -L/Users/josh/Development/TS/Mac/Seeker/../../../debug -L/Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/gcc/i686-apple-darwin10/4.2.1 -F/Users/josh/Development/TS/Mac/Seeker/build/Debug -filelist /Users/josh/Development/TS/Mac/Seeker/build/Seeker.build/Debug/Seeker.build/Objects-normal/i386/Seeker.LinkFileList -mmacosx-version-min=10.5 -framework Cocoa -framework WebKit -lSAPI -lSPL -o /Users/josh/Development/TS/Mac/Seeker/build/Debug/Seeker.app/Contents/MacOS/Seeker Undefined symbols: "SocPipe::SendUserGet(_GUID const&, _GUID const&, _GUID const&, char const*)", referenced from: -[PeoplePaneController clickGet:] in PeoplePaneController.o ld: symbol(s) not found collect2: ld returned 1 exit status Is this a type/function signature error, or truly some sort of linker error? I have the headers where all these types and static classes are defined #imported -- I tried #include too, just in case, since I'm already stumbling :P Forgive me, I come from a web tech background, so this c-style memory management and immutability stuff is super hazy. Edit: Added full linker error text. Changed "function" to "method" Thanks, Josh

    Read the article

  • InstallShield Development Error during the Installation process

    - by Alopekos
    Hey guys, I've been stuck by a problem for the past couple days that makes no sense to me. My installer builds fine in the Installshiled IDE but when it is about to finish the installation, int gets two errors then rollbacks: installation failure. Right when the install bar is at about 100%, an error box pops up that states this: "Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019)." The box pops up once, then the installer flashes its status to "rollback" then pops up another error box, then once 'ok'ed it procedes to rollback as usual. I don't understand that error message so i looked in the msi loggings and found this: "InstallShield 13:20:08: Initializing Property Bag... InstallShield 13:20:08: Getting file count from property bag InstallShield 13:20:08: File Count : 7 InstallShield 13:20:08: Sorting Based On Order... InstallShield 13:20:08: This setup is running on a 32-bit Windows...No need to load ISBEW64.exe InstallShield 13:20:08: Registering file C:\Program Files\Cadwell\Easy III\QMWSChartDataServer.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\DataDelivery.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMAdoDB.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\QMPatientData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedShareGlobalData.dll (32-bit) InstallShield 13:20:09: Registering file C:\Program Files\Cadwell\Easy III\MedDirectory.dll (32-bit) InstallShield 13:20:09: Begin Comitting Property Bag InstallShield 13:20:09: Write KeyList count InstallShield 13:20:09: Finished Comitting Property Bag Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.commit. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.rollback. Action 13:20:09: _EBDE7916DF6AF3B644016C54F66930DC.install. Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!84) [13:20:26:455]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.install did not close 1 MSIHANDLEs. Action ended 13:20:26: InstallFinalize. Return value 3. Action 13:20:26: Rollback. Rolling back action: Rollback: _EBDE7916DF6AF3B644016C54F66930DC.install Rollback: _EBDE7916DF6AF3B644016C54F66930DC.rollback Error 1001.Exception occurred while initializing the installation: System.IO.FileLoadException: Attempt to load an unverifiable executable with fixups (IAT with more than 2 sections or a TLS section.) (Exception from HRESULT: 0x80131019). MSI (s) (34!E8) [13:20:27:036]: Info 2769.Custom Action _EBDE7916DF6AF3B644016C54F66930DC.rollback did not close 1 MSIHANDLEs. Rollback: _EBDE7916DF6AF3B644016C54F66930DC.commit Rollback: ISSelfRegisterFiles Rollback: Registering modules Rollback: Registering type libraries Rollback: Writing system registry values Rollback: Registering program identifiers" All rollbacking commands after this point. 
For some reason it looks like to me that installshield is trying to launch my program before it finishes the installation, even when I told it to prompt the user to decide to launch. Is this a registering command system that makes it attemp or what? I've been scouring the web all day and I've found some ideas, but I havent seen any solutions as of yet. The installers that I've tried(and failed) have always needed to be Setup.exes, when i try to build a .msi only setup I get this error message. It may help someone who knows more about this system than I do. Your project contains InstallShield prerequisites. A Setup.exe setup launcher is required if you are build a release that includes InstallShield prerequisites. Change your release settings to build Setup.exe, or remove the prerequisites from your project. -7076 There's nothing on the site that has anything from the error code, so I'm at a loss. Thanks for any help possible, I'm an intern and I'm fresh out of ideas... Josh System: XP SP3 Installshield 2010 Pro Install being tested on a VirtualPC

    Read the article

  • App.Config Transformation for Visual Studio 2010?

    - by Amitabh
    For Visual Studio 2010 web-based applications we have the Config Transformation feature, with which we can maintain multiple configuration files for different environments. But the same feature is not available for the App.Config files of Windows Services, WinForms, or Console applications. There is a workaround, suggested at the following link: http://vishaljoshi.blogspot.com/2010/05/applying-xdt-magic-to-appconfig.html However, it is not straightforward and requires a number of steps. Is there an easier way to achieve the same for App.Config files?
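    For what it's worth, the XDT engine behind web.config transforms can also be driven directly from code or a build step, which is sometimes enough on its own. The sketch below is an assumption-laden illustration rather than a drop-in answer: in VS2010 the transform classes live in Microsoft.Web.Publishing.Tasks.dll (later shipped as Microsoft.Web.XmlTransform), and the file names are examples only.
        using Microsoft.Web.Publishing.Tasks; // or Microsoft.Web.XmlTransform in later releases

        class TransformAppConfig
        {
            static void Main()
            {
                // Example file names; App.Release.config would hold the xdt:Transform attributes.
                var doc = new XmlTransformableDocument { PreserveWhitespace = true };
                doc.Load("App.config");

                var transform = new XmlTransformation("App.Release.config");
                if (transform.Apply(doc))
                    doc.Save("App.transformed.config");
            }
        }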

    Read the article

  • XP deployment issues due to msvcr90.dll trying to load FlsAlloc

    - by Sorin Sbarnea
    I have an application build with VS2008 SP1a (9.0.30729.4148) on Windows 7 x64 that does not want to start under XP. The message is The application failed to initialize properly (0x80000003). Click on OK to terminate the application.. I checked with depends.exe and found that msvcr90.dll does try to load FlsAlloc from KERNEL32.dll - and FlsAlloc is available only starting with Vista. I'm sure it is not used by the application. How to solve the issue? The SxS package is already installed on the target machine - In fact I have all 3 versions of 9.0 SxS (initial release, sp1, and sp1+security patch) Application is compiled with _BIND_TO_CURRENT_VCLIBS_VERSION=1 Also I defined the right target Windows version on stdafx.h #define WINVER 0x0500 #define _WIN32_WINNT 0x0500 Manifest file <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false" /> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.MFC" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> </assembly> Result from depends Started "c:\program files\app\app.EXE" (process 0xA0) at address 0x00400000. Successfully hooked module. Loaded "c:\windows\system32\NTDLL.DLL" at address 0x7C900000. Successfully hooked module. Loaded "c:\windows\system32\KERNEL32.DLL" at address 0x7C800000. Successfully hooked module. Loaded "c:\program files\app\MFC90.DLL" at address 0x785E0000. Successfully hooked module. Loaded "c:\program files\app\MSVCR90.DLL" at address 0x78520000. Successfully hooked module. Loaded "c:\windows\system32\USER32.DLL" at address 0x7E410000. Successfully hooked module. Loaded "c:\windows\system32\GDI32.DLL" at address 0x77F10000. Successfully hooked module. Loaded "c:\windows\system32\SHLWAPI.DLL" at address 0x77F60000. Successfully hooked module. Loaded "c:\windows\system32\ADVAPI32.DLL" at address 0x77DD0000. Successfully hooked module. Loaded "c:\windows\system32\RPCRT4.DLL" at address 0x77E70000. Successfully hooked module. Loaded "c:\windows\system32\SECUR32.DLL" at address 0x77FE0000. Successfully hooked module. Loaded "c:\windows\system32\MSVCRT.DLL" at address 0x77C10000. Successfully hooked module. Loaded "c:\windows\system32\COMCTL32.DLL" at address 0x5D090000. Successfully hooked module. Loaded "c:\windows\system32\MSIMG32.DLL" at address 0x76380000. Successfully hooked module. Loaded "c:\windows\system32\SHELL32.DLL" at address 0x7C9C0000. Successfully hooked module. Loaded "c:\windows\system32\OLEAUT32.DLL" at address 0x77120000. Successfully hooked module. Loaded "c:\windows\system32\OLE32.DLL" at address 0x774E0000. Successfully hooked module. Entrypoint reached. All implicit modules have been loaded. DllMain(0x78520000, DLL_PROCESS_ATTACH, 0x0012FD30) in "c:\program files\app\MSVCR90.DLL" called. GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsAlloc") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543ACC and returned NULL. Error: The specified procedure could not be found (127). 
GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsGetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AD9 and returned NULL. Error: The specified procedure could not be found (127). GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsSetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AE6 and returned NULL. Error: The specified procedure could not be found (127). GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsFree") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AF3 and returned NULL. Error: The specified procedure could not be found (127).

    Read the article

  • How to flatten an already filled-out PDF form using iTextSharp

    - by andryuha
    I'm using iTextSharp to merge a number of PDF files together into a single file. I'm using the method described in the iTextSharp official tutorials, specifically here, which merges files page by page via PdfWriter and PdfImportedPage. It turns out some of the files I need to merge are filled-out PDF forms, and with this method of merging the form data is lost. I've seen several examples of using PdfStamper to fill out forms and flatten them. What I can't find is a way to flatten an already filled-out PDF form and, hopefully, merge it with the other files without saving a flattened version first. Thanks
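    One pattern that is often suggested for this situation (sketched below as an assumption, not tested against the poster's exact merge code) is to run each form through PdfStamper with FormFlattening enabled, writing into a MemoryStream, and then hand that flattened in-memory copy to the page-by-page merge instead of the original file:
        using System.IO;
        using iTextSharp.text.pdf;

        static class PdfFlattener
        {
            // Returns the bytes of a flattened copy of the given PDF form,
            // without writing an intermediate file to disk.
            public static byte[] Flatten(string path)
            {
                var reader = new PdfReader(path);
                using (var ms = new MemoryStream())
                {
                    var stamper = new PdfStamper(reader, ms);
                    stamper.FormFlattening = true; // bake the field values into the page content
                    stamper.Close();               // must close the stamper before using the stream
                    reader.Close();
                    return ms.ToArray();
                }
            }
        }

        // Usage with the existing merge code (file name hypothetical):
        // var importedReader = new PdfReader(PdfFlattener.Flatten("filledForm.pdf"));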

    Read the article

  • Too nervous to install

    - by The Prop
    Yesterday I (a professional rugby prop of somewhat limited intellect) landed in http://htmlagilitypack.codeplex.com/ and found myself stranded in a town with no signposts. The locals don't need signposts - they know their way around - so who gives a hoot about visitors? Well I'm a visitor and I'm lost. Here's my plea to the good burgesses of Codeplex-sans-signs: HELP!! Let me back-track and explain what landed me at the bottom of this tangled ruck. There's a "Download" button positioned near the top-right of the Codeplex web page, right? Like the Sword of Damocles, a down-arrow to the left of the button indicates, presumably, what a download would include: CURRENT 1.4.0 Stable DATE Fri May 7 2010 at 7:00 AM STATUS Stable With a simple-minded confidence that has since deserted me (the confidence - not the simple-mindedness), I clicked "Download". This introduced 3 new files to my computer: HtmlAgilityPack.dll, HtmlAgilityPack.pdb, and HtmlAgilityPack.XML This is when the first stab of doubt penetrated that globe between my cauliflower ears that I call a head. Where's the dot cs? Somewhere in Codeplex, I'd read advice to another lost soul to "download and build the HTMLAgilityPack solution". As I've done so many times as an All Black prop, I glared at the opposition front row - ah, I mean the 3 new files. Shouldn't one of them have a ".cs" on the back of his jersey - er, on the end of its name? Or is this just how they play the game in Codeplex-sans-signs? Undaunted (props have more courage than sense) I packed into my first C# scrum. The half-back feeds the ball in, and the front rows collapse - er, the debugging stops at this line of my code: "HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();" Then the Referee blows his whistle and announces one of those verdicts that's utterly indecipherable to your average loose-head prop: Locating source for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Checksum: MD5 {62 bc f3 7e 9a 92 a6 32 7 d6 5b f8 76 59 7b 5b} The file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs' does not exist. Looking in script documents for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'... Looking in the projects for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. The file was not found in a project. Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\atlmfc'... Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\crt'... The debugger will ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The user pressed Cancel [a brain-stemmer from the prop] in the Find Source dialog. The debug source files settings for the active solution have been modified so that the debugger will not ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The debugger could not locate the source file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Even if it had been the first 50 stanzas of "Eskimo Nell", I couldn't have been more shocked. I'm so shocked, my jaws clamp shut around the opposition hooker's ear. He thumbs me in the iris. With a cornea-torn eye I peer at the Codeplex site. My brain stem sparks and I punch the "View all downloads" link. It sparks four more times on each download link, and.. lo! 
FOUR files this time: HAPExplorer.zip, HtmlAgilityPack.1.4.0.Source.zip, HtmlAgilityPack.1.4.0.zip, HtmlAgilityPack.Documentation.chm But... is this not the same place arrived at recently by my flat-mate Chaz, journalist extraordinaire? (Chaz, if you're reading this, I'm not plugging for nothing - just write kindly about me in your next report, okay?) Didn't these same four files flummox Chaz The Great? He told me about it. Chaz left a message with Codeplex and then solved the problem by just walking away. Typical journalist, huh. But I'm not like that. I don't walk away. I'm made of the sort of stubborn stuff that becomes an All Black prop. Hence this impassioned plea: GOOD TOWNSFOLK OF CODEPLEX-SANS-SIGNS, WHAT SHOULD I DO NEXT? Can somebody point me to Main Street? How does a simpleton install 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'? I'm willing to prostrate myself and grovel to the first kind face that passes in front of my rapidly clouding sight. So help me, I'd even tug my forelock if I had one! Should I hold forth my rod over the wilderness, and create a folder called 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\' or some such? If so, what files should I move into it? ANYTHING else a dum-ass should know about? - and I mean ANYTHING - you just don't know how witless a punch-drunk prop can be.. %( Whenever I've installed other programs they've given me an ".exe" or ".msi" that I can click on and it's all done for me like magic. HEY... there's nothing of that nature here, is there? Am I missing something? Something for dummies to click? (From the waiting rooms of Dr I. Sight Phixes) (signed) The Prop

    Read the article

  • Reading GML in C#

    - by taudorf
    I have a problem reading some GML files in C#. My files do not have a schema or namespace declarations and look like the file from this question: http://stackoverflow.com/questions/1818147/help-parsing-gml-data-using-c-linq-to-xml only without the schema, like this: <gml:Polygon srsName='http://www.opengis.net/gml/srs/epsg.xml#4283'> <gml:outerBoundaryIs> <gml:LinearRing> <gml:coord> <gml:X>152.035953</gml:X> <gml:Y>-28.2103190007845</gml:Y> </gml:coord> <gml:coord> <gml:X>152.035957</gml:X> <gml:Y>-28.2102020007845</gml:Y> </gml:coord> <gml:coord> <gml:X>152.034636</gml:X> <gml:Y>-28.2100120007845</gml:Y> </gml:coord> <gml:coord> <gml:X>152.034617</gml:X> <gml:Y>-28.2101390007845</gml:Y> </gml:coord> <gml:coord> <gml:X>152.035953</gml:X> <gml:Y>-28.2103190007845</gml:Y> </gml:coord> </gml:LinearRing> </gml:outerBoundaryIs> </gml:Polygon> When I try to read the document with the XDocument.Load method I get an exception saying the 'gml' namespace is not defined. I have a lot of GML files, so I do not want to add the schema and namespaces to all of them. Does anybody know how to read these files?
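
    One approach that should work here, sketched below, is to declare the missing 'gml' prefix yourself through an XmlParserContext and feed the resulting XmlReader to LINQ to XML. The namespace URI and the file name in the sketch are assumptions, not values taken from the files above.

        using System;
        using System.IO;
        using System.Xml;
        using System.Xml.Linq;

        class GmlLoader
        {
            const string GmlUri = "http://www.opengis.net/gml";   // assumed GML namespace

            static XElement LoadGml(string path)
            {
                // Bind the prefix the files use but never declare.
                var nsManager = new XmlNamespaceManager(new NameTable());
                nsManager.AddNamespace("gml", GmlUri);

                var context = new XmlParserContext(null, nsManager, null, XmlSpace.Default);
                var settings = new XmlReaderSettings { ConformanceLevel = ConformanceLevel.Fragment };

                using (var stream = File.OpenRead(path))
                using (var reader = XmlReader.Create(stream, settings, context))
                {
                    return XElement.Load(reader);
                }
            }

            static void Main()
            {
                XNamespace gml = GmlUri;
                var polygon = LoadGml("polygon.gml");             // hypothetical file name
                foreach (var coord in polygon.Descendants(gml + "coord"))
                {
                    Console.WriteLine("{0}, {1}",
                        (string)coord.Element(gml + "X"),
                        (string)coord.Element(gml + "Y"));
                }
            }
        }

    This avoids touching the files themselves, which matters when there are many of them.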

    Read the article

  • Google App Engine with Java - Error running javac.exe compiler

    - by dta
    On Windows XP. I just downloaded and unzipped the Google App Engine Java SDK to C:\Program Files\appengine-java-sdk. I have the JDK installed in C:\Program Files\Java\jdk1.6.0_20. I ran the sample application with appengine-java-sdk\bin\dev_appserver.cmd appengine-java-sdk\demos\guestbook\war Then I visited localhost:8080 to find: HTTP ERROR 500 Problem accessing /. Reason: Error running javac.exe compiler Caused by: Error running javac.exe compiler at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:473) How do I fix it? My JAVA_HOME points to C:\Program Files\Java\jdk1.6.0_20. I also tried changing my appcfg.cmd to: @"C:\Program Files\Java\jdk1.6.0_20\bin\java" -cp "%~dp0..\lib\appengine-tools-api.jar" com.google.appengine.tools.admin.AppCfg %* That didn't work either.

    Read the article

  • Python: how do I install SciPy on 64 bit Windows?

    - by Peter Mortensen
    How do I install SciPy on my system? Update 1: for the NumPy part (that SciPy depends on) there is actually an installer for 64 bit Windows: numpy-1.3.0.win-amd64-py2.6.msi (is direct download URL, 2310144 bytes). Running the SciPy superpack installer results in this message in a dialog box: "Cannot install. Python version 2.6 required, which was not found in the registry." I already have Python 2.6.2 installed (and a working Django installation in it), but I don't know about any Registry story. The registry entries seems to already exist: REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Python] [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore] [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6] [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Help] [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Help\Main Python Documentation] @="D:\\Python262\\Doc\\python262.chm" [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\InstallPath] @="D:\\Python262\\" [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\InstallPath\InstallGroup] @="Python 2.6" [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Modules] [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\PythonPath] @="D:\\Python262\\Lib;D:\\Python262\\DLLs;D:\\Python262\\Lib\\lib-tk" What I have done so far: Step 1 Downloaded the NumPy superpack installer numpy-1.3.0rc2-win32-superpack-python2.6.exe (direct download URL, 4782592 bytes). Running this installer resulted in the same message, "Cannot install. Python version 2.6 required, which was not found in the registry.". Update: there is actually an installer for NumPy that works - see beginning of the question. Step 2 Tried to install NumPy in another way. Downloaded the zip package numpy-1.3.0rc2.zip (direct download URL, 2404011 bytes), extracted the zip file in a normal way to a temporary directory, D:\temp7\numpy-1.3.0rc2 (where setup.py and README.txt is). I then opened a command line window and: d: cd D:\temp7\numpy-1.3.0rc2 setup.py install This ran for a long time and also included use of cl.exe (part of Visual Studio). Here is a nearly 5000 lines long transcript (230 KB). This seemed to work. I can now do this in Python: import numpy as np np.random.random(10) with this result: array([ 0.35667511, 0.56099423, 0.38423629, 0.09733172, 0.81560421, 0.18813222, 0.10566666, 0.84968066, 0.79472597, 0.30997724]) Step 3 Downloaded the SciPy superpack installer, scipy-0.7.1rc3- win32-superpack-python2.6.exe (direct download URL, 45597175 bytes). Running this installer resulted in the message listed in the beginning Step 4 Tried to install SciPy in another way. Downloaded the zip package scipy-0.7.1rc3.zip (direct download URL, 5506562 bytes), extracted the zip file in a normal way to a temporary directory, D:\temp7\scipy-0.7.1 (where setup.py and README.txt is). I then opened a command line window and: d: cd D:\temp7\scipy-0.7.1 setup.py install This did not achieve much - here is a transcript (about 95 lines). And it fails: >>> import scipy as sp2 Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named scipy Platform: Python 2.6.2 installed in directory D:\Python262, Windows XP 64 bit SP2, 8 GB RAM, Visual Studio 2008 Professional Edition installed. The startup screen of the installed Python is: Python 2.6.2 (r262:71605, Apr 14 2009, 22:46:50) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> Value of PATH, result from SET in a command line window: Path=D:\Perl64\site\bin;D:\Perl64\bin;C:\Program Files (x86)\PC Connectivity Solution\;D:\Perl\site\bin;D:\Perl\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;d:\Program Files (x86)\WinSCP\;D:\MassLynx\;D:\Program Files (x86)\Analyst\bin;d:\Python262;d:\Python262\Scripts;D:\Program Files (x86)\TortoiseSVN\bin;D:\Program Files\TortoiseSVN\bin;C:\WINDOWS\system32\WindowsPowerShell\v1.0;D:\Program Files (x86)\IDM Computer Solutions\UltraEdit\

    Read the article

  • Add file using SharpSVN

    - by jan
    Hi, I would like to add all unversioned files under a directory to SVN using SharpSVN. I tried regular svn commands on the command line first: C:\temp\CheckoutDir> svn status -v I see all subdirs, all the files that are already checked in, a few new files labeled "?", and nothing with the "L" lock indication. C:\temp\CheckoutDir> svn add . --force This results in all new files in the subdirs, which are already under version control themselves, being added. I'd like to do the same using SharpSVN. I copy a few extra files into the same directory and run this code: ... using ( SharpSvn.SvnClient svn = new SvnClient() ) { SvnAddArgs saa = new SvnAddArgs(); saa.Force = true; saa.Depth = SvnDepth.Infinity; try { svn.Add(@"C:\temp\CheckoutDir\." , saa); } catch (SvnException exc) { Log(@"SVN Exception: " + exc.Message + " - " + exc.File); } } But an SvnException is raised: SvnException.Message: Working copy 'C:\temp\CheckoutDir' locked SvnException.File: ..\..\..\subversion\libsvn_wc\lock.c No other SvnClient instance is running in my code; I also tried calling svn.CleanUp() right before the Add, but to no avail. Since the documentation is rather vague ;), I was wondering if anyone here knew the answer. Thanks in advance! Jan
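
    For what it's worth, here is a hedged sketch of what I would try, based only on the error shown above: run SvnClient.CleanUp on the working-copy root to clear the stale lock, then call Add with the normalized directory path rather than a path ending in "\.". The paths and console messages are placeholders.

        using System;
        using SharpSvn;

        class AddUnversioned
        {
            static void Main()
            {
                const string workingCopy = @"C:\temp\CheckoutDir";   // placeholder path

                using (var svn = new SvnClient())
                {
                    // Release stale locks left by an interrupted operation
                    // (the "Working copy '...' locked" error).
                    svn.CleanUp(workingCopy);

                    var saa = new SvnAddArgs
                    {
                        Force = true,              // ignore "already versioned" errors
                        Depth = SvnDepth.Infinity  // recurse into subdirectories
                    };

                    try
                    {
                        // Pass the directory itself, not "...\." as in the snippet above.
                        svn.Add(workingCopy, saa);
                        Console.WriteLine("Unversioned files scheduled for addition.");
                    }
                    catch (SvnException exc)
                    {
                        Console.WriteLine("SVN Exception: {0} - {1}", exc.Message, exc.File);
                    }
                }
            }
        }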

    Read the article

  • Is there a way to recover from an exception in Directory.EnumerateFiles?

    - by Magnus Johansson
    In .NET 4, there's the Directory.EnumerateFiles() method with recursion that seems handy. However, if an exception occurs during the recursion, how can I recover from it and continue enumerating the rest of the files? try { var files = from file in Directory.EnumerateFiles(@"c:\", "*.*", SearchOption.AllDirectories) select new { File = file }; Console.WriteLine(files.Count().ToString()); } catch (UnauthorizedAccessException uEx) { Console.WriteLine(uEx.Message); } catch (PathTooLongException ptlEx) { Console.WriteLine(ptlEx.Message); }
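
    One commonly suggested workaround, sketched below under the assumption that silently skipping unreadable directories is acceptable, is to do the directory walk yourself so that an UnauthorizedAccessException or PathTooLongException only skips the offending directory instead of ending the whole enumeration.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        static class SafeEnumerator
        {
            public static IEnumerable<string> EnumerateFilesSafe(string root, string pattern)
            {
                var pending = new Queue<string>();
                pending.Enqueue(root);

                while (pending.Count > 0)
                {
                    string dir = pending.Dequeue();

                    string[] files = null;
                    try
                    {
                        files = Directory.GetFiles(dir, pattern);
                        foreach (var sub in Directory.GetDirectories(dir))
                            pending.Enqueue(sub);
                    }
                    catch (UnauthorizedAccessException) { /* skip the protected directory */ }
                    catch (PathTooLongException)        { /* skip paths that are too long */ }

                    // yield outside the try block, as C# requires
                    if (files != null)
                        foreach (var file in files)
                            yield return file;
                }
            }

            static void Main()
            {
                Console.WriteLine(EnumerateFilesSafe(@"c:\", "*.*").Count());
            }
        }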

    Read the article

  • SQL Server 2008 log issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. Under the logs folder (on my machine it is C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log), there are three kinds of files: ERRORLOG, ERRORLOG.1, ERRORLOG.2 ... ERRORLOG.6; FDLAUNCHERRORLOG, FDLAUNCHERRORLOG.1, FDLAUNCHERRORLOG.2, ... FDLAUNCHERRORLOG.6; log_207.trc, log_208.trc, ... My question is: what are the different functions of these log files? And why are there files ending with .1, .2, etc.? Thanks in advance, George

    Read the article
