Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was differentiating between instances at the prompt using the \h escape in PS1. EDIT I also changed permissions on my home directory: I made it group-writable. END EDIT Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).
    Should I have done something after changing the hostname? Now I can't get into the instance! :(
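    The hostname change is probably a red herring; sshd, with its default StrictModes yes, refuses public-key logins when the home directory or ~/.ssh is group- or world-writable. A minimal sketch of the usual fix, run from out-of-band access or after attaching the volume to another instance (the /home/ubuntu path is a guess; substitute the actual login user):
        # tighten the permissions sshd insists on before it will read authorized_keys
        chmod g-w /home/ubuntu
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys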

    Read the article

  • Can't bind spawn-fcgi to address

    - by Xeoncross
    Following some nice instructions I am almost through setting up PHP to run on nginx. However, every time I try to start spawn-fcgi I get an error message:
        demo@desktop:/usr/bin$ sudo /etc/init.d/php-fastcgi start
        spawn-fcgi: bind failed: Cannot assign requested address
    My /etc/init.d/php-fastcgi startup script is:
        #!/bin/bash
        PHP_SCRIPT=/usr/bin/php-fastcgi
        FASTCGI_USER=demo
        RETVAL=0
        case "$1" in
            start)
                su - $FASTCGI_USER -c $PHP_SCRIPT
                RETVAL=$?
                ;;
            stop)
                killall -9 php5-cgi
                RETVAL=$?
                ;;
            restart)
                killall -9 php5-cgi
                su - $FASTCGI_USER -c $PHP_SCRIPT
                RETVAL=$?
                ;;
            *)
                echo "Usage: php-fastcgi {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $RETVAL
    It in turn runs /usr/bin/php-fastcgi, which contains:
        #!/bin/sh
        /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 6 -u demo -f /usr/bin/php5-cgi
    One thing to note is that I am running the PHP CGI as the user "demo", which is my account.
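    A couple of checks commonly suggested for a "Cannot assign requested address" bind failure (a diagnostic sketch, assuming the loopback address is in doubt and net-tools is installed):
        # is something already bound to the FastCGI port?
        sudo netstat -tlnp | grep ':9000'
        # does the loopback interface actually carry 127.0.0.1?
        ip addr show lo
        # run spawn-fcgi by hand to see the full error outside the init script
        sudo /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 6 -u demo -f /usr/bin/php5-cgi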

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let's run a query against this DMV so we can analyze the results.
        SELECT * FROM sys.dm_tran_locks
    As we can see, there is a lot of lock information returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the id of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx
    Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Linux Mint something wrong with my .bashrc

    - by user2309862
    The path of my .bashrc file is /home/vamsi/.bashrc. It is weird that my file has nothing but the paths I set; I think I am using a file at the wrong location, or that I have lost my .bashrc file, as none of the environment variables set here seem to work.
        #ANDROID_DEV
        ANDROID_HOME=/opt/android-sdk-linux
        export ANDROID_HOME
        PATH= $PATH:$ANDROID_SDK_HOME/tools
        export PATH
        PATH=$PATH:$ANDROID_HOME/platform-tools
        export PATH
        PATH=$PATH:$ANDROID_HOME/build-tools
        export PATH
        #MAVEN-PATH
        M2_HOME=/opt/apache-maven-3.1.0
        export M2_HOME
        M2=$PATH:$M2_HOME/bin
        export M2
    I was prompted to install maven2 in order to use mvn, but the android command cannot be found. Could you please help me find a solution to this issue?
    EDIT: Meanwhile, I tried this:
        export PATH=${PATH}:/opt/android-sdk-linux/platform-tools
        export PATH=${PATH}:/opt/android-sdk-linux/tools
    Now, running $PATH echoes:
        bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/opt/android-sdk-linux/platform-tools:/opt/android-sdk-linux/build-tools: No such file or directory
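    Two things stand out in the quoted file (the corrected fragment below is only a sketch, not the poster's final answer): "PATH= $PATH:..." has a space after the equals sign, so bash runs the expansion as a command with an empty PATH instead of assigning it, and that line also references $ANDROID_SDK_HOME, which is never defined (the variable set above is ANDROID_HOME). Likewise, typing $PATH alone asks bash to execute the value; use echo "$PATH" to inspect it.
        # ~/.bashrc -- corrected Android/Maven PATH setup (sketch)
        export ANDROID_HOME=/opt/android-sdk-linux
        export PATH="$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$ANDROID_HOME/build-tools"
        export M2_HOME=/opt/apache-maven-3.1.0
        export M2="$M2_HOME/bin"
        export PATH="$PATH:$M2"
        # check the result with:  echo "$PATH"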

    Read the article

  • Computer starts, POSTs normal, no video

    - by Annath
    So, I recently tried to start up an old computer of mine. It would not power on, so I replaced the PSU. When I powered it on, all the fans spun up and the POST beeps indicated a normal startup. The problem was that there was no video output. Every couple of minutes I hear the POST beeps again, so I think it is restarting after POST, but without video I have no idea why. Does anyone have an idea as to why this is happening, and how to fix it? EDIT: The video card is a PCI-e card; there is no integrated graphics on the motherboard. If I remove the video card, the POST indicates a missing card. When I put it back, it goes back to normal, so I know that the board recognizes that the card exists.

    Read the article

  • Shutdown in background - PHP

    - by William
    I'm trying to shutdown an Ubuntu machine from PHP and am running into an issue if I want to delay the shutdown. The PHP line I'm using is: exec("sudo shutdown -h +5 &", $output); Where 5 is however many minutes in the future I want to shutdown. My problem is that this won't background and Apache hangs until either the machine is shutdown or someone else cancels the shutdown. shell_exec() has the same result. Is there another way to do this that will return immediately?
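    The workaround usually suggested (a sketch, not verified against this exact setup) is to redirect the command's output as well as backgrounding it: PHP's exec() and shell_exec() wait until the command's output streams are closed, so the ampersand alone isn't enough. The command string passed to exec() would then look like:
        # redirecting stdout/stderr lets the shell detach, so exec() returns immediately
        sudo shutdown -h +5 > /dev/null 2>&1 &
    This assumes the web-server user already has passwordless sudo for shutdown, which the original command implies.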

    Read the article

  • Toshiba Satellite laptop connected to HDTV

    - by VANJ
    I have a Toshiba Satellite A505-S6014 laptop running Windows 7 64-bit, connected to a Toshiba Regza 42" HDTV with an HDMI cable. The laptop has a 16" display and the screen resolution is set to the maximum/recommended 1366x768. The display output is set to "LCD+HDMI". The display looks fine on the laptop screen, but on the TV it is not a "full screen" picture; it leaves a good 3" black border around all four edges. When I switch the display to "HDMI only", it is now too big for the TV screen and some of my desktop icons are no longer visible off to the side. What is the best way to set this up? I guess that since the 16" and 42" displays have different native resolutions, the LCD+HDMI mode is defaulting to the optimal size for the 16" panel. But when I set it to HDMI only, what is the appropriate resolution for a 42" full-screen display? Thanks

    Read the article

  • Windows 8 recimg createimage fails with 0x80070002, why?

    - by ezuk
    Running Win8 x64. C: is an SSD with lots of free space (200 GB). I run:
        recimg /createimage C:\CustomRefreshImages\Img121105
    Output says:
        Source OS location: C:
        Recovery image path: C:\CustomRefreshImages\Img121105\CustomRefresh.wim
        Creating recovery image. Press [ESC] to cancel.
        Initializing
        The recovery image cannot be written.
        Error Code - 0x80070002
    The paths specified in the command line are created, and I'm running this in an elevated command prompt, so it's not a permissions issue. I've googled this but could only find worthless results, including a particularly entertaining one where the "solution" included instructions for creating a screenshot (press PrtScreen). Any help would be most appreciated.

    Read the article

  • April 2010 Critical Patch Update Released

    - by eric.maurice
    Hi, this is Eric Maurice. Today Oracle released the April 2010 Critical Patch Update (CPUApr2010), the first one to include security fixes for Oracle Solaris. Today's Critical Patch Update (CPU) provides 47 new security fixes across the following product families: Oracle Database Server, Oracle Fusion Middleware, Oracle Collaboration Suite, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Life Sciences, Retail, and Communications Industry Suites, and Oracle Solaris. 28 of these 47 new vulnerabilities are remotely exploitable without authentication, but the criticality of the affected components and the severity of these vulnerabilities vary greatly. Customers should, as usual, refer to the Risk Matrices in the CPU Advisory to assess the relevance of these fixes for their environment (and the urgency with which to apply the fixes).
    7 of the 47 new vulnerabilities affect various versions of Oracle Database Server. None of these 7 vulnerabilities are remotely exploitable without authentication. Furthermore, none of these fixes are applicable to client-only deployments. The most severe CVSS Base Score for the Database Server vulnerabilities is 7.1. As a reminder, information about Oracle's use of the CVSS 2.0 standard can be found in Note 394487.1 (My Oracle Support subscription required). Note that this Critical Patch Update includes fixes for vulnerabilities that were publicly disclosed by David Litchfield at the BlackHat DC Conference in early February (CVE-2010-0866 and CVE-2010-0867).
    5 of the 47 new vulnerabilities affect various components of the Oracle Fusion Middleware product family. The highest CVSS Base Score for these vulnerabilities is 7.5. Note that the patches for Oracle WebLogic Server are cumulative, and this Critical Patch Update therefore also includes a fix for a vulnerability (CVE-2010-0073) that was the subject of a Security Alert issued by Oracle on February 4, 2010. Customers who have not applied the previously released patch should apply today's Critical Patch Update as soon as possible.
    As stated at the beginning of this blog, it is also noteworthy that this Critical Patch Update provides 16 new fixes for the Sun product line. With the recent close of the Sun acquisition, both security organizations have worked diligently to align Sun's previous security practices with Oracle's. Java users know that Oracle released a Critical Patch Update for Java SE and Java for Business earlier this month (in accordance with the Java patching schedule previously published by Sun Microsystems). Please note that for the first time, the Java advisories included CVSS scores to help assess the severity of the new vulnerabilities fixed with the advisory.
    The rapid inclusion of the Solaris product lines in the Critical Patch Update and the extension of Oracle Software Security Assurance to Sun technologies are evidence of the flexibility of Oracle's security assurance programs. These should also result in tangible security benefits for the users of the Oracle hardware and software stack (such as a predictable patching schedule for all Oracle products).

    Read the article

  • How to use qcow2 disk image in Linux?

    - by sauparna
    I have a large qcow2-formatted disk image, which I use as storage. Often I need to move data to and from this disk image. I mount the disk using the qemu-nbd tool as follows:
        modprobe nbd max_part=63
        qemu-nbd -c /dev/nbd0 /host/disk100G.img
        mount /dev/nbd0p1 /home/rup/disk
    But disk access fails every now and then in the midst of some I/O operation with an "Input/output error". At that point I have to manually unmount the disk and re-mount it so that I can run the program again:
        qemu-nbd -d /dev/nbd0
        umount joborkhaki/
    What could be the reason for this? Is there a better tool that I can use to maintain a qcow2 disk image?
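    When qemu-nbd mounts prove flaky, one alternative that is often suggested (assuming the libguestfs-tools package is available and the data sits on the first partition) is mounting the image through libguestfs:
        # mount the first partition of the qcow2 image via FUSE
        guestmount -a /host/disk100G.img -m /dev/sda1 /home/rup/disk
        # ... copy data ...
        guestunmount /home/rup/disk   # on older versions: fusermount -u /home/rup/disk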

    Read the article

  • Ext3 partition doesn't mount on Snow Leopard using MacFUSE

    - by Fez
    I'm dual-booting OS X and Ubuntu on a Macbook 4,1. I'm trying to mount my Linux partition in OS X. I installed MacFUSE 2.0.3,2 and fuse-ext2-0.0.7 on Snow Leopard 10.6.5. I created the directory /Volumes/Ubuntu and tried to mount the disk there using the command:
        fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/
    This is the output I get:
        fuse-ext2: version:'0.0.7', fuse_version:'27' [main (../../fuse-ext2/fuse-ext2.c:324)]
        fuse-ext2: enter [do_probe (../../fuse-ext2/do_probe.c:30)]
        fuse-ext2: Error while trying to open /dev/disk0s4 (rc=13) [do_probe (../../fuse-ext2/do_probe.c:34)]
        fuse-ext2: Probe failed [main (../../fuse-ext2/fuse-ext2.c:340)]
    Any clue what's going wrong? Thanks!
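    rc=13 is EACCES, i.e. the probe could not open the raw device for lack of permission, so the first thing worth trying (a guess based on the error code, not a confirmed fix) is running the mount as root, optionally read-only:
        sudo fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/
        sudo fuse-ext2 -o ro /dev/disk0s4 /Volumes/Ubuntu/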

    Read the article

  • Connection problem between redmine and svn

    - by WombaT
    I have a server running Debian 6.0. I installed and configured Redmine some time ago, and have now configured an SVN server. I'm trying to configure Redmine to be able to view the SVN repository. The URL is: https://192.168.11.78/svn/bee
    The connection is not working; the log shows this error:
        Error parsing svn output: #<REXML::ParseException: No close tag for /lists/list>
    Google says that it's a common error, and that it's possible to fix it by permanently accepting the server certificate, so I did that, and nothing changed. It still doesn't work. Later, I added
        [global]
        store-plaintext-passwords = no
    to the .subversion/servers file. I did this (and the certificate accept) for both the root and www-data users. Nothing helped; Redmine still shows the error "The entry or revision was not found in the repository." What else can I do?
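    Since the certificate has to be accepted by the exact user Redmine runs as, a sketch of how that is usually verified (assuming Redmine runs under www-data):
        # run an svn command as the Redmine user and accept the certificate with "p" (permanently)
        sudo -u www-data -H svn list https://192.168.11.78/svn/bee
        # confirm the acceptance landed where Redmine will look for it
        sudo -u www-data -H ls ~www-data/.subversion/auth/svn.ssl.server/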

    Read the article

  • WinXP How to Tunnel LPT over USB

    - by Michael Pruitt
    I have a windows program that accesses a device connected to a LPT (1-3) 25 pin port. The communication is bidirectional, and I suspected the control lines are also accessed directly. I would like to migrate the device to a machine that does not have a LPT port. I saw the dos2usb software, but that takes the output (from a DOS program) and 'prints' it formatted for a specific printer. I need a raw LPT connection, and a cable that provides access to all the control signals. I do have a USB to 36-pin Centronics that may have the extra signals. I use it with a vinyl cutter that doesn't like most of the USB dongles. It comes up as USB001. Would adding and sharing a generic printer, then mapping LPT1 to the share get me closer? Would that work for a parallel port scanner? My preferred solution is a USB cable with a driver that will map it to LPT1, LPT2, or LPT3.
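    For the "share a printer and map LPT1" idea, the usual recipe (sketched below with a placeholder share name) is the net use redirection; note that it only captures data written to the port, so software that drives the control lines directly, or a parallel-port scanner, will not work through it:
        REM share the USB printer on this machine (here as "USBLPT"), then redirect LPT1 to that share
        net use LPT1: \\%COMPUTERNAME%\USBLPT /persistent:yes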

    Read the article

  • The remote host closed the connection. The error code is 0x80070057

    - by Jalpesh P. Vadgama
    While creating a PDF or any file with ASP.NET pages I was getting the following error:
        Exception Type: System.Web.HttpException
        The remote host closed the connection. The error code is 0x80072746.
        at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.FlushCore(Byte[] status, Byte[] header, Int32 keepConnected, Int32 totalBodySize, Int32 numBodyFragments, IntPtr[] bodyFragments, Int32[] bodyFragmentLengths, Int32 doneWithSession, Int32 finalStatus, Boolean& async)
        at System.Web.Hosting.ISAPIWorkerRequest.FlushCachedResponse(Boolean isFinal)
        at System.Web.Hosting.ISAPIWorkerRequest.FlushResponse(Boolean finalFlush)
        at System.Web.HttpResponse.Flush(Boolean finalFlush)
        at System.Web.HttpResponse.Flush()
        at System.Web.UI.HttpResponseWrapper.System.Web.UI.IHttpResponse.Flush()
        at System.Web.UI.PageRequestManager.RenderFormCallback(HtmlTextWriter writer, Control containerControl)
        at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
        at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output)
        at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.HtmlFormWrapper.System.Web.UI.IHtmlForm.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.PageRequestManager.RenderPageCallback(HtmlTextWriter writer, Control pageControl)
        at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
        at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
        at System.Web.UI.Page.Render(HtmlTextWriter writer)
        at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
        at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        Exception Type: System.Web.HttpException
        The remote host closed the connection. The error code is 0x80072746.
        at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.FlushCore(Byte[] status,
    After searching and analyzing I found that the client had disconnected while I was still flushing the response, which I do when streaming the generated PDF files. To fix this kind of error we can use the Response.IsClientConnected property to check whether the client is still connected, and only then flush and end the response. Here is the sample code to fix that problem:
        if (Response.IsClientConnected)
        {
            Response.Flush();
            Response.End();
        }
    That's it. Hope this will help you. Stay tuned for more. Till then, Happy Programming!!
    Technorati Tags: Exception, ASP.NET

    Read the article

  • Read only file system

    - by Jack Moon
    I'm running Ubuntu 12.10. Upon opening any shell I get the following error:
        /home/jack/.rbenv/libexec/rbenv-init: line 87: cannot create temp file for here-document: Read-only file system
    I realised this wasn't simply an rbenv issue, as any file I try to write to returns an error saying the system is read-only. I don't know how else to describe my problem; each time I boot up, the system goes through a disk check where it supposedly fixes several errors on my disk. Here is my /etc/fstab:
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc            /proc         proc   nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=1cc4b2ab-a984-4516-ac25-6d64f5050244 /    ext4  errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=4e0dfeae-701a-43ce-b5c6-65f15ab3d8e3 none swap  sw                0 0
    The entire file system is read-only. I've tried the following:
        sudo fsck.ext4 -f /dev/sda1
    which gave the following (shortened) output:
        /dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
        /dev/sda1: ***** REBOOT LINUX *****
        /dev/sda1: 1257080/45268992 files (1.0% non-contiguous), 50696803/181051904 blocks
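    Because errors=remount-ro means the kernel demoted the filesystem after hitting an ext4 error, a couple of first steps commonly suggested (a sketch; back up before forcing repairs):
        # see which error triggered the read-only remount
        dmesg | grep -iE 'ext4|remount'
        # from a live CD or recovery shell, with / unmounted, force a full check
        sudo fsck.ext4 -fy /dev/sda1
        # if errors keep coming back, check the drive's health
        sudo smartctl -a /dev/sda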

    Read the article

  • Middle Mouse Button does not work in XFCE / Arch Linux

    - by Alp
    I have the XFCE desktop manager installed on my Arch Linux system. With E17 (the Enlightenment desktop manager) I had no problems with my mouse: all buttons worked correctly out of the box. But in XFCE my middle mouse button does not fire an event at all (no output with xev). Evdev seems to identify my mouse correctly (Razer Deathadder) because it echoes its name in the Xorg logs. I have no idea what could cause this or how to debug the problem. I start both E17 and XFCE with startx. Here is my ~/.xinitrc:
        exec startxfce4 --with-ck-launch
        #exec enlightenment-start
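    A common way to narrow this down (assuming the xorg-xinput utility is installed) is to check what the X input layer itself sees:
        # list input devices and note the mouse's id
        xinput list
        # watch raw button events for that id; a working middle button shows up as button 2
        xinput test <device-id>
        # and check how many buttons the driver mapped
        xinput get-button-map <device-id>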

    Read the article

  • Windows: "net use" equivalent that shows the username used to mount a share?

    - by Jeroen Wiert Pluimers
    Sometimes you need to use different credentials to map different shares, for instance:
        net use z: \\myserver\myshare /user:mydomain\myusername mypassword
    You can use net use to show the shares that are mapped, but it does not show the mydomain\myusername part. What can I use to show that information? I'm looking for output like this from the imaginary command NetUseX:
        H:\>NetUseX
        New connections will be remembered.
        Status        Local  Remote                      Username                Network
        ------------------------------------------------------------------------------------------------------
        OK            D:     \\MyServer1234\SomeData     MyServer1234\MyUserA    Microsoft Windows Network
        Unavailable   E:     \\MyServer4321\SomeApps     MyDomain\MyUserB        Microsoft Windows Network
        OK            H:     \\MyServer4321\HomeDAta     MyDomain\MyUserB        Microsoft Windows Network
        Disconnected  W:     \\MyServer6789\WorkData     MyDomain\MyUserB        Microsoft Windows Network
        OK                   \\MyServer9876\Shortcuts    MyServer9876\MyUserC    Microsoft Windows Network
        The command completed successfully.
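    The WMI class behind drive mappings does carry the user name, so wmic can produce roughly the table asked for (standard on Vista/7; no extra tools needed):
        REM list each mapped drive with the account used to connect it
        wmic netuse get LocalName,RemoteName,UserName,Status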

    Read the article

  • PXE boot very slow when PXE server is virtualbox

    - by sqrtsben
    As I read in questions here and on the Internet, PXE and Virtualbox don't seem to like each other too much. My problem is the following: I have a virtualized machine which hosts the DHCP and PXE server for 10 native clients. They are rebooted roughly every 10 mins and on each reboot, they need to boot a small linux (the initrd is ~4MB). Before, I had a native machine running and booting via PXE was very fast. Now, looking at the output of nload, I only get 500kbit/s whenever one machine is booting. The machines are connected via a GBit switch, so that can't be it. Also, when testing the connection speed to the outside, I have the full bandwidth available. Is VBox just unable to deal with large amounts of UDP packets? Can anyone point me in the right direction here?
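    One tweak that is often suggested (an assumption, not something stated in the question; the VM name and host interface below are placeholders): TFTP is very sensitive to per-packet latency, and the emulated NIC models add a lot of it, so switching the PXE-server VM's NIC to virtio and bridging it straight onto the client LAN can help:
        VBoxManage modifyvm "pxe-server" --nic1 bridged --bridgeadapter1 eth0 --nictype1 virtio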

    Read the article

  • debian squeeze: where do the logs for sysv init scripts go? (why won't my init script work)

    - by sbeam
    My actual problem is trying to debug an init script that starts Resque. It works fine when run as root from the command line, but does nothing on boot. It has proper insserv headers, and I've run update-rc.d to create the symlinks and checked that they exist. The script is +x.
        # find /etc/rc*.d -name \*resque\*
        /etc/rc0.d/K01resque
        /etc/rc1.d/K01resque
        /etc/rc2.d/S01resque
        /etc/rc3.d/S01resque
        /etc/rc4.d/S01resque
        /etc/rc5.d/S01resque
        /etc/rc6.d/K01resque
        # ls -l /etc/init.d/resque
        -rwxr-xr-x 1 root root 2093 Oct 24 03:02 /etc/init.d/resque
    The script can be viewed here if you like. It uses LSB functions to log messages, which essentially echo() to STDOUT, I believe. So where does the output go during startup? It's not in /var/log/*log.
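    The short answer usually given: on squeeze an init script's stdout just goes to the console, and nothing writes it to /var/log unless boot logging is enabled. Two ways to see what the script does at boot (the log path below is a hypothetical choice):
        # near the top of /etc/init.d/resque, temporarily:
        exec >>/var/log/resque-init.log 2>&1
        set -x
    Alternatively, install the bootlogd package if it isn't present, set BOOTLOGD_ENABLE=Yes in /etc/default/bootlogd, and check /var/log/boot after the next reboot.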

    Read the article

  • Reading from a staging 2D texture array in DirectX10

    - by Don Reba
    I have a DX10 program, where I create an array of 3 16x16 textures, then map, read, and unmap each subresource in turn. I use a single mip level, set resource usage to staging and CPU access to read. Now, here is the problem: Subresource 0 contains 1024 bytes, pitch 64, as expected. Subresource 1 contains 512 bytes, pitch 64. Subresource 2 contains 256 bytes, pitch 64. I expect all three to be the same size. Debugging output is enabled, but not reporting any warnings or errors. Am I missing something, or might this be some sort of driver issue? Here is the code. The language is Nemerle, but C# and C++ would look almost the same. I have looked through the generated code, and am fairly confident the problem is not language-related.
        def cpuTexture = Texture2D
            ( device
            , Texture2DDescription() <-
              {
                Width             = 16;
                Height            = 16;
                MipLevels         = 1;
                ArraySize         = 3;
                Format            = Format.R32_Float;
                Usage             = ResourceUsage.Staging;
                CpuAccessFlags    = CpuAccessFlags.Read;
                SampleDescription = SampleDescription(count = 1, quality = 0);
              }
            );
        foreach (subresource in [0 .. 2])
        {
            def data = cpuTexture.Map(subresource, MapMode.Read, MapFlags.None);
            Console.WriteLine($"subresource $subresource");
            Console.WriteLine($"length = $(data.Data.Length)");
            Console.WriteLine($"pitch = $(data.Pitch)");
            cpuTexture.Unmap(subresource);
        }

    Read the article

  • iptables rules keep showing up

    - by Omriko
    I just installed an Ubuntu Precise server; after a few weird communication issues I checked the iptables list and found:
        Chain INPUT (policy DROP)
        target     prot opt source        destination
        ACCEPT     all  --  anywhere      anywhere
        ACCEPT     all  --  anywhere      anywhere     state RELATED,ESTABLISHED
        ACCEPT     tcp  --  10.0.0.0/24   anywhere     tcp spts:1024:65535 dpt:ssh state NEW
        ACCEPT     icmp --  anywhere      anywhere     state NEW
        ACCEPT     icmp --  anywhere      anywhere     state NEW
        ACCEPT     icmp --  anywhere      anywhere     state NEW
        ACCEPT     icmp --  anywhere      anywhere     state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:10520 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1:65535 dpt:31337 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1:65535 dpt:31338 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1:65535 dpt:54320 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1:65535 dpt:54321 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:12345 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:12346 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:20034 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:16600 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:16660 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp dpt:65000 state NEW
        DROP       udp  --  anywhere      anywhere     udp dpt:34555 state NEW
        DROP       udp  --  anywhere      anywhere     udp dpt:35555 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:netbios-ns:netbios-dgm dpts:netbios-ns:netbios-dgm state NEW
        DROP       tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:netbios-ssn state NEW
        DROP       tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:microsoft-ds state NEW
        DROP       udp  --  anywhere      anywhere     udp spt:microsoft-ds dpt:microsoft-ds state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1024:65535 dpt:microsoft-ds state NEW
        DROP       tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:loc-srv state NEW
        DROP       tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:5000 state NEW
        DROP       tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpts:1025:1029 state NEW
        DROP       udp  --  anywhere      anywhere     udp spts:1:65535 dpt:loc-srv state NEW
        ACCEPT     tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:28082 state NEW
        DROP       all  --  anywhere      anywhere     state NEW
        Chain FORWARD (policy DROP)
        target     prot opt source        destination
        Chain OUTPUT (policy DROP)
        target     prot opt source        destination
        ACCEPT     all  --  anywhere      anywhere
        ACCEPT     all  --  anywhere      anywhere     state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere      anywhere     tcp spts:tcpmux:65535 dpts:tcpmux:65535 state NEW
        ACCEPT     udp  --  anywhere      anywhere     udp dpts:1:65535 state NEW
        ACCEPT     icmp --  anywhere      anywhere     state NEW
        ACCEPT     tcp  --  anywhere      anywhere     tcp spts:1024:65535 dpt:28082 state NEW
        DROP       all  --  anywhere      anywhere     state NEW
    I tried to wipe the rules and I disabled UFW, and I've rewritten and saved iptables rules according to this guide, but every minute or so the old rules return... I checked crontab for scheduled tasks and there is nothing in there, but still these rules appear every minute... please help!
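    Since something is clearly re-applying a saved ruleset, a sketch of how one might hunt for the culprit (the package names are only the usual suspects):
        # look for firewall-restoring hooks in the places Debian/Ubuntu runs them
        sudo grep -rl iptables /etc/network/if-up.d /etc/network/if-pre-up.d /etc/cron* /etc/init.d 2>/dev/null
        # check for firewall daemons or hosting-provider agents that rewrite rules
        ps aux | grep -iE 'fail2ban|shorewall|ferm|arno|firehol' | grep -v grep
        dpkg -l | grep -iE 'iptables-persistent|shorewall|ferm|arno-iptables'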

    Read the article

  • Cloud Computing: Syntec Numérique publishes part 3 of its white paper

    - by Eric Bezille
    A client/provider view brought together around a draft contractual framework. At the Cloud Computing World Expo held at the CNIT last week, I attended the presentation of Syntec Numérique's new installment on Cloud Computing and the "new models" it brings about: economic models, contracts, client-provider relationships, and the organization of the IT department. What sets this white paper apart from those already available in the field is that it set out to bring all the client-side stakeholders (through the CRIP) and the providers together around a framework for contractual formalization, building on the e-SCM model.
    An accelerated shift of IT into a service provider, and the end of siloed IT? While Cloud Computing accelerates IT's transition into a service provider (in the continuation of ITIL v3), it also highlights, for CIOs, the challenge of a disruptive model that calls for cross-functional skills in order to guarantee the qualities expected of a Cloud Computing service: on-demand "self-service" deployment, standardized access over the network, management of pools of shared resources, and an "elastic" service that can be grown or shrunk quickly in line with measurable demand. It should be clear from this that Cloud Computing goes well beyond mere server virtualization.
    As Constantin Gonzales quite rightly describes in his blog ("Three Enterprise Principles for Building Clouds"), what matters is compliance with the standard for the service access interface. How the service is actually implemented (in the cloud) is then the provider's burden and responsibility. It is up to the provider to optimize as best it can in order to stay competitive, while guaranteeing the expected service levels. The service provider, of course, has to master that implementation, which rests essentially on integrating and automating the necessary layers and components... over time... while absorbing the evolution of each of those elements. The client, for its part, must always make sure the solution is reversible, through adherence to standards... a point also covered in the Syntec white paper, which recalls the points to watch and takes stock of the progress of standards around Cloud Computing. Happy reading...

    Read the article

  • Pure Front end JavaScript with Web API versus MVC views with ajax

    - by eyeballpaul
    This is more of a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas using AJAX.
    Also, when it came to partial page refreshing, I would call an MVC action method which would return an HTML fragment that I could then use to populate parts of the page. This would be for areas that I did not want to slow down the initial page load, or areas that fitted better with AJAX calls. One example would be table paging: if you want to move on to the next page, I would prefer an AJAX call to fetch that info rather than a full page refresh, but the AJAX call would still return an HTML fragment.
    My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front-end background? An intelligent front-end developer that I work with prefers to do more or less nothing in the MVC views, and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method which returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page.
    The front-end developer's way means that any benefits I normally get from MVC model validation, including client-side validation, would be gone. It also means that any benefits I get from creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean I would need to write the same validation for both the front end and the back end. The JavaScript would also need lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view to create the row and then return it as part of the AJAX call, which then gets injected into the table. In a pure front-end approach, the JavaScript would take in an object (for, say, a product) from the API call and then create the row from that object, building each individual part of the table row.
    The website in question will have lots of different areas: administration, forms, product searching and so on. It is not a website that I think needs to be architected as a single-page application. What are everyone's thoughts on this? I am interested to hear from front-end devs and back-end devs.

    Read the article

  • samba not starting on ubuntu

    - by Mirage
    I have this output:
        user123@Matrix-Server:~$ /etc/init.d/samba stop
        bash: /etc/init.d/samba: No such file or directory
        sputnik@Matrix-Server:~$ sudo /etc/init.d/samba restart
        sudo: /etc/init.d/samba: command not found
        user123@Matrix-Server:~$
        user123@Matrix-Server:~$ sudo apt-get install samba smbfs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        samba is already the newest version.
        smbfs is already the newest version.
        The following packages were automatically installed and are no longer required:
          linux-headers-2.6.32-19-generic linux-headers-2.6.32-19
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
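    Given that the samba package is installed while /etc/init.d/samba is missing, a sketch of how to find the actual service name (on some Ubuntu releases the daemons ship as smbd/nmbd jobs rather than a single samba script):
        # what init scripts or upstart jobs did the package actually install?
        dpkg -L samba | grep -E 'init|upstart'
        ls /etc/init.d/ /etc/init/ 2>/dev/null | grep -iE 'smb|samba|nmb'
        # then restart whichever job exists, for example:
        sudo service smbd restart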

    Read the article

  • Throughput tool with decent graphing.

    - by Cory J
    I've been looking through some of the tools available for measuring network throughput, namely iperf, bwping, ttcp, etc. I am planning on doing throughput tests over a long period of time, so what I really need is good graphing output, preferably RRD graphs. The Jperf frontend for iperf will generate a graph, and bmon has a nice command-line graph, but these simply count seconds since the test was started. I am trying to measure trends in throughput over times of the day, so a graph indexed by time and day is necessary. A way to get iperf to log to RRDs would be best; if this isn't possible, could someone point me toward another solution?
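    A minimal sketch of wiring iperf into an RRD by hand (the file name and 5-minute interval are arbitrary; iperf 2's -y C flag emits CSV whose last field is bits per second):
        # one-time RRD creation: one GAUGE, 5-minute step, two weeks of samples
        rrdtool create throughput.rrd --step 300 DS:bps:GAUGE:600:0:U RRA:AVERAGE:0.5:1:4032
        # run from cron every 5 minutes against a long-running "iperf -s" on the far end
        BPS=$(iperf -c server.example.com -t 10 -y C | awk -F, '{print $NF}')
        rrdtool update throughput.rrd N:$BPS
    Graphs then come from rrdtool graph, which puts real dates and times on the x-axis.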

    Read the article
