Search Results

Search found 21454 results on 859 pages for 'via'.


  • Can I set up a 2nd home wireless router, with router2 connecting to the internet through a desktop which is wirelessly connected to router1?

    - by gil b.
    Hi, I apologize for the crudeness of my MSPaint drawing, but please view my diagram of what I'd like to accomplish: "Proposed home network architecture".

    Currently, all devices are connected to one wireless router. I would like to make my own subnet, with a box in between my subnet and the shared wireless router, so that I can learn about IDS, traffic analysis, etc. I was also given a Cisco PIX firewall to play around with, and it would be an added bonus if I could incorporate that into my network. The reason for this proposed architecture is so that I can monitor all MY traffic without seeing anything going on with my roommates' traffic.

    My main question is: is it possible to have my desktop connect to the wireless router via its wireless card AND share that connection via its Ethernet card, hooked to wireless router 2? In other words:

    cable modem -> wireless router 1 -> desktop PC (connected wirelessly) -> wireless router 2 (getting internet from the wired connection to the desktop PC) -> laptops (connected wirelessly)

    The PIX can be left out for now, but I'm wondering if it could eventually be incorporated? Thanks!
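
    On the "share that connection via the Ethernet card" part: if the desktop in the middle runs Linux, the sharing piece is just IP forwarding plus NAT. A minimal sketch, assuming the wireless interface is wlan0 (towards router 1) and the wired interface is eth0 (towards router 2); the interface names are assumptions:

        # let the desktop forward packets between its two interfaces
        sudo sysctl -w net.ipv4.ip_forward=1
        # NAT traffic arriving on the wired side out through the wireless side
        sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    On Windows, the rough equivalent is enabling Internet Connection Sharing on the wireless adapter. Either way, router 2's WAN port is what gets cabled to the desktop's Ethernet card.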

  • Routing tables don't show ppp0 after 12.04 kernel upgrade to 3.5.0: Haier CE682 modem configuration

    - by ubunsteve
    I'm trying to get my Haier CE682 EVDO modem, model number 201e:1022, to work in Ubuntu 12.04 with kernel 3.5.0-030500-generic #201207211835. I had it working in a previous 12.04 kernel, using compat-wireless and these instructions: http://zulkhamsyahmh.blogspot.com/2012/05/install-smartfren-haier-ce682-on-ubuntu.html. To get it working I had to edit the routing tables so that a ppp0 entry showed up, as suggested at http://www.linuxquestions.org/questions/slackware-14/wvdial-is-connecting-but-im-unable-to-do-anything-714861/

    Network Manager doesn't work with this modem, so I use either wvdial or gpppon to connect to it, both of which work (after I run the command sudo modprobe usbserial vendor=0x201e product=0x1022).

    This is the output when I connect to the modem with gpppon:

        Using interface ppp0
        Connect: ppp0 <-- /dev/ttyUSB0
        sent [LCP ConfReq id=0x1 ]
        rcvd [LCP ConfAck id=0x1 ]
        rcvd [LCP ConfReq id=0x2 ]
        sent [LCP ConfAck id=0x2 ]
        sent [LCP EchoReq id=0x0 magic=0x819c86db]
        rcvd [CHAP Challenge id=0x1 <1ac8f12799e953967a3cc222c9254690, name = ""]
        sent [CHAP Response id=0x1 <6f12a903dc40915ca2761c17b87f8fbd, name = "smart"]
        rcvd [LCP EchoRep id=0x0 magic=0x0]
        rcvd [CHAP Success id=0x1 ""]
        CHAP authentication succeeded
        CHAP authentication succeeded
        sent [CCP ConfReq id=0x1 ]
        sent [IPCP ConfReq id=0x1 ]
        rcvd [IPCP ConfReq id=0x1 ]
        sent [IPCP ConfAck id=0x1 ]
        rcvd [CCP ConfReq id=0x1]
        sent [CCP ConfAck id=0x1]
        rcvd [CCP ConfRej id=0x1 ]
        sent [CCP ConfReq id=0x2]
        rcvd [IPCP ConfRej id=0x1 ]
        sent [IPCP ConfReq id=0x2 ]
        rcvd [CCP ConfAck id=0x2]
        rcvd [IPCP ConfNak id=0x2 ]
        sent [IPCP ConfReq id=0x3 ]
        rcvd [IPCP ConfAck id=0x3 ]
        not replacing existing default route via 192.168.3.1
        local IP address 10.191.248.154
        remote IP address 10.17.95.25
        primary DNS address 10.17.3.244
        secondary DNS address 10.17.3.245

    As you can see, there is a problem: "not replacing existing default route via 192.168.3.1". This is the output of route:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         192.168.3.1     0.0.0.0         UG    0      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        192.168.3.0     *               255.255.255.0   U     2      0        0 wlan0

    I had tried these commands, which had previously worked in the earlier kernel:

        route del default
        route add default ppp0

    but that broke my wireless internet connection. I then added the default route back, as shown above, with sudo route add default gw 192.168.3.1 wlan0. So it seems I need to add or change the routing to show a ppp0 connection, but I don't know how to do that.
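
    A minimal sketch of juggling the two default routes by hand, assuming the gateway 192.168.3.1 and the interface names from the output above, and that internet traffic should go over ppp0 while the modem is connected:

        # after pppd logs "not replacing existing default route":
        sudo route del default gw 192.168.3.1 dev wlan0
        sudo route add default dev ppp0
        # when the modem disconnects, restore the wireless default route:
        sudo route add default gw 192.168.3.1 dev wlan0

    Alternatively, pppd can do the swap itself if the peer/wvdial configuration passes the replacedefaultroute option (a Debian/Ubuntu-specific pppd extension) in addition to defaultroute; treat that as something to verify against your pppd version.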

  • Wireless bridge between two prolink adsl modem/router

    - by MyName
    Alright, so I've got two Prolink Hurricane H5004N units. It's a broadband ADSL modem and router. My PC is connected to the first one via Ethernet. What I want to do is set up a wireless bridge to make the two routers "talk". I've tried hooking up the DSL cable (as they are modems) to both, but one disconnects as the other connects. I don't really know about the configuration to be done for all the DHCP or RIP and NAT forwarding options (I'm just writing what I saw). In short, I want the second router to act as a Wi-Fi repeater, but I don't see any repeater option, and I also do not want to connect them via Ethernet. So is it possible to do something? Apart from buying another repeater, that is; I don't want to spend any more.

  • Video on Hyperion Tax Provision

    - by Lia Nowodworska - Oracle
    (in via Jan) EPM Information Development has asked us to remind you about the new video available for Hyperion Tax Provision. You can view it on the OracleEPMWebcasts YouTube channel here: http://bit.ly/1jxLlCy

    An information-rich 4:40 minutes of your time, so please take a look. The video gives a brief overview of the main features of Hyperion Tax Provision. You will learn:

    - That the Tax Provision and Reporting System builds on the Hyperion Financial Close Reporting platform.
    - That much of the Tax Provision flow process is similar to the Financial Close process, and that the modules have been aligned to work together very closely.
    - That HTP enables you to integrate book and tax reporting on a common platform.
    - That it uses the technology of HFM, ties in with SmartView and can be used with Hyperion Financial Reporting.
    - That the native integration between the financial system and the tax system creates transparency for tax departments and removes bottlenecks in the tax process.

    More technical information can be found here:

    - Oracle Hyperion Tax Provision Data Sheet
    - Oracle Hyperion Tax Provision White Paper
    - Oracle Hyperion Tax Provision Documentation

    If you have another 45 minutes to spare and want to get into greater detail, you can check out the recording of an Advisor Webcast that we did earlier last year. You can find it via the KM document "Oracle Business Analytics Advisor Webcast Schedule and Archive Recordings" (Doc ID 1456233.1): select the tab "Archived 2013"; it is the third entry from the top, "Oracle Hyperion Tax Provision - Features and Overview with Demo". If you have questions about that Advisor Webcast, you may participate in the Community Discussion about it.

    (layout and post: Torben, authorized: Lia)

  • Cannot enable Additional drivers for Broadcom in 12.10

    - by Compt
    tl;dr version: Additional Drivers does not work for enabling a driver in Ubuntu 12.10, although it worked in 12.04.

    I upgraded to Ubuntu 12.10 from 12.04 via a live USB. I did a fresh install to avoid any conflicts. From 12.04 I know that my wireless card has proprietary drivers that can be installed via Additional Drivers. I did eventually find Additional Drivers in the Software Sources menu; however, when I attempt to switch to the "Broadcom 802.11 Linux STA wireless driver source from bcmwl-kernel-source", it not only does not work, it in fact disables my wireless completely. The only way to revert this is to restart and go back to "Do not use the device"; then, upon another restart, my wireless functions once again.

    I was curious whether anyone else has had this issue, and whether there is a fix for it yet. I looked on Launchpad and it may be a potential bug, but I am unsure. Any help would be greatly appreciated (and sorry for the wall of text). Until then, I'll continue to use my wireless as it is (or revert to 12.04), but I do notice a slower connection without the driver enabled.
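
    Not a confirmed fix, but a commonly suggested sequence for the STA/wl driver on 12.10 that can be tried from a terminal instead of the Software Sources switcher; it assumes the card really is one supported by the bcmwl-kernel-source package:

        sudo apt-get update
        sudo apt-get install --reinstall bcmwl-kernel-source
        # unload modules that conflict with the proprietary wl module, then load wl
        sudo modprobe -r b43 ssb brcmsmac bcma
        sudo modprobe wl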

  • Set up IIS 7 as an FTP server that is reachable from outside my local network

    - by Usta
    I was able to set up an FTP site that I can access via ftp://127.0.0.1/ or my local (static) IP. To do this I followed these instructions (with the exception that I did not bind to 127.0.0.1 as suggested): http://learn.iis.net/page.aspx/301/creating-a-new-ftp-site-in-iis-7/

    I have created a firewall exception for ports 20 and 21, and set up port forwarding on my wireless router. But I can only access the site via localhost, and I need a friend to have read access to it. So how do I enable remote access to it? (I'd rather not purchase a domain name.)

    My setup:
    - IIS 7.5
    - Windows 7 Professional
    - Wireless network
    - Norton Internet Security 2012
    - An internal static IP address

  • How to implement smart card authentication with a .NET Fat client?

    - by John Nevermore
    I know very little about smart card authentication in general, so please point out or correct me if anything below doesn't make sense. Let's say I have:

    - a Certificate Authority "X"'s smart card (non-exportable private key)
    - drivers for that smart card, written in C
    - a smart card reader
    - the CA's authentication OCSP web service
    - a requirement to implement user authentication in a .NET fat client application via a smart card that was given out by the CA "X"

    I tried searching for info on the web, but to no avail. What would the steps be? My first thought was:

    Set up a web service that would allow saving of (for example) scores of a ping pong game for each user. Each time someone tries to submit a score via the client application, he can only do so by inserting the smart card into the reader. Then the public key is read from the smart card by native C calls through .NET and sent to my custom web service, which in turn uses the CA's authentication OCSP web service to prove the validity of the public key/public certificate (?). If the public key is OK and valid, encrypt a random sequence of bytes with the public key and send it to the client application. If the client application sends back the correctly decrypted random sequence of bytes along with the score of the ping pong game, then the score is saved in the database for the given user.

    My question is: is this the correct way to do it? What else should I know about smart card authentication?
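
    Not .NET-specific, but the certificate-validity check in the flow above can be prototyped from the command line with OpenSSL before it is wired into the custom web service; a minimal sketch, where the file names and responder URL are placeholders to be replaced with the CA "X" specifics:

        # ca.pem     - CA "X"'s issuing certificate (placeholder name)
        # client.pem - the certificate read from the smart card (placeholder name)
        openssl ocsp -issuer ca.pem -cert client.pem \
            -url http://ocsp.ca-x.example -CAfile ca.pem -resp_text
        # a "good" status here corresponds to the "public key is okay and valid" branch above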

  • How to integrate Thunderbird with SpamAssassin running on the server?

    - by haimg
    I'm trying to integrate SpamAssassin running on a server with Thunderbird. Basically I need to be able to select several emails in Thunderbird and send them back to SpamAssassin for training, either as spam or ham. I tried several approaches:

    - The "Report Spam" plugin, which is able to send a message back to the server either as an email attachment or via an HTTP POST. However, the plugin is rather buggy: it does not support sending several messages at once, "report as ham" is not working at all, etc.
    - A custom button that would copy selected messages to a separate IMAP folder (I could create "LearnAsSpam" and "LearnAsHam" folders in IMAP that would get processed automatically on the server), but I don't even know how to approach this in Thunderbird, and I don't want to learn Thunderbird extension authoring...

    Server-side, I'm prepared to do whatever custom programming or integration is needed (I can receive a message via HTTP / SMTP / whatever); my stumbling block is Thunderbird. So, how can I send emails from Thunderbird back to SpamAssassin running on the email server for Bayesian training, with as few keystrokes as possible?
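
    For the IMAP-folder idea, the server side needs nothing more than a cron job; a minimal sketch, assuming a Maildir-style mail store and placeholder paths (adjust to the real mailbox layout). sa-learn remembers which messages it has already seen, so re-running it over the same folders is harmless:

        #!/bin/bash
        # feed anything dropped into the training folders to SpamAssassin's Bayes database
        MAILDIR=/var/vmail/example.com/user/Maildir    # placeholder path
        sa-learn --spam "$MAILDIR"/.LearnAsSpam/{cur,new}
        sa-learn --ham  "$MAILDIR"/.LearnAsHam/{cur,new}

    On the Thunderbird side, dragging messages into those IMAP folders (or routing them there with a message filter) needs no extension at all.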

  • What's the best way to recover when your RAID H/W incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1.

    After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; via recovery tools I can verify that, if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS; the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It has done this once before; however, on a subsequent reboot after double-checking the drive cabling (but not changing anything), it booted up fine. So I think the mobo RAID is a little bit flaky.

    At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirrored disk. But the RAID drivers seem somehow deeply bound into the Windows startup; I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode, I can use the recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks and run chkdsk on them, and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI.

    So:

    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the 2 disks in the RAID array, then delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single-disks like my other 2? Then maybe I could leave Windows with the RAID drivers but operate the disks as singles, with 2 of them in a Windows dynamic-disk mirrored setup.
    4. Does Windows 7 have anything like the Windows XP Repair Install, where it will reinstall the O/S binaries from CD but leave apps and settings alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7; the last one, when I upgraded from XP, took me two days to get everything set up and installed.

  • Operating System not Found after installing Ubuntu in a Sony Vaio

    - by diego8arock
    I just bought a Sony Vaio SVS15115FLB that came with Windows 7. After enjoying the PC's graphics power for a little while, I decided it was time to install Ubuntu 12.04.

    First, I inserted a USB stick, rebooted and pressed F11, but a message appeared saying that no OS was found on the USB, so I used a live CD instead. It booted fine and I installed Ubuntu. Then, when it was time to restart the PC, it didn't boot to GRUB but went straight to Windows, which began a startup-error repair looking for a solution; after it was done, it restarted, booted to Windows again and ran the same startup-error repair. I freaked out, so I booted the Ubuntu live CD again and installed Ubuntu over everything. After it installed, I rebooted, and then a message appeared saying "Operating System not Found", and I have no idea why.

    So I Googled again and found a post on creating a boot partition. I did everything exactly as in that post, but it didn't work. By the way, this was the message (it appeared the first time; then I did it all again and it was gone):

        The boot files of [Ubuntu 12.04 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair].

    I rebooted and nothing; the same "Operating System not Found" message appeared. So I decided to create a partition for Windows, hoping for something, but the message still appears.

    I really have no idea what to do, but there is something odd: if I insert the USB stick containing Ubuntu 11.10, the message saying there is no OS on the pen drive flashes for a fraction of a second and the machine boots straight into Ubuntu 12.04 without problems (and it booted to Windows when I installed it, ignoring Ubuntu). Right now I'm using it like that, but it's pretty annoying.

    Can anyone advise me how to fix this? I'm no expert on this kind of thing (boot, GRUB, recovery and stuff like that).
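
    If you end up needing to re-run Boot-Repair after creating the /boot partition it asks for, a minimal sketch of installing it from the 12.04 live session; the PPA and package name are the ones commonly documented for Boot-Repair, so treat them as assumptions if they have moved:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair    # then choose "Recommended repair"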

  • Webserver insists on opening "blog1.php" instead of "index.php"

    - by pepoluan
    I'm at my wits' end. I have just ripped out a website and am in the process of rebuilding everything. Previously, the 'home page' of the website was a blog, with the address "www.mydomain.com/blog1.php". After exporting everything, I deleted the whole directory and, based on a request, immediately created a blog/ directory. The idea is to get the blog back up as soon as possible and temporarily redirect people accessing www.mydomain.com to the blog.

    Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address. The problem: the server insists on opening blog1.php instead of index.php, even after we deleted all the files (including .htaccess). Even putting in a new .htaccess file with the single line

        DirectoryIndex index.php

    doesn't work. The server stubbornly wants blog1.php.

    Now, the server is actually a webhosting account, so I have no actual access to it; I have to do my work via cPanel. Currently, I work around this issue by creating blog1.php, but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important setting in the byzantine cPanel menu pages?
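
    One quick check that narrows this down, run from any machine with curl (www.mydomain.com is the placeholder domain from the question): if the response carries a Location header pointing at blog1.php, the redirect is coming from the server configuration rather than from the browser's cache of an old 301.

        curl -sI http://www.mydomain.com/
        # look for the status line (e.g. "HTTP/1.1 301 Moved Permanently")
        # and a "Location: .../blog1.php" header in the output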

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise, got it up and running with some MediaWiki software and a static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, let everything point to port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. Went back and started checking the Ubuntu install itself (I'm under root from this point on):

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped or is waiting for something...

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself. I checked ufw (the firewall) to see if port 22 is doing alright:

        $ sudo ufw status
        Status: active

        To         Action    From
        22         ALLOW     Anywhere
        22/tcp     ALLOW     Anywhere
        22         ALLOW     Anywhere (v6)
        22/tcp     ALLOW     Anywhere (v6)

    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus: moving the sshd_config file aside and reinstalling did the trick! service ssh status now shows that ssh is running, and I am able to log in remotely from my Windows XP computer via PuTTY. Thanks so much! I can now use my monitor for other things!
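
    For anyone hitting the same duplicate "Subsystem 'sftp'" error, a minimal sketch of the fix described in the edit above (move the broken config aside and let the package restore a stock one); the --force-confmiss option is what tells dpkg to regenerate the missing configuration file:

        sudo mv /etc/ssh/sshd_config /etc/ssh/sshd_config.broken
        sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" openssh-server
        sudo service ssh restart
        service ssh status    # should now report start/running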

  • Create an AWS AMI for Ubuntu with GUI which automatically launches web browser

    - by Rory MacDonald
    I've got an Ubuntu AMI set up with the Ubuntu desktop installed, and Chrome installed and set to start automatically (via the startup programmes menu within the Ubuntu desktop). I've created an image of this AMI, but any time I launch a new instance from it, the Ubuntu GUI doesn't seem to load until I SSH into the machine, enable VNC and then connect to the machine with Chicken VNC. At that point the desktop appears to load and starts the browser. I really need the machine to boot and the browser to load without having to VNC into the machine. Any help would be appreciated.
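
    One likely piece of the puzzle is that the desktop session (and with it the Chrome startup entry) only begins once a user logs in. A sketch of enabling automatic login, assuming the image uses LightDM as its display manager and the login user is "ubuntu" (both are assumptions; adjust to the AMI):

        # add these two lines to the existing [SeatDefaults] section of /etc/lightdm/lightdm.conf:
        #   autologin-user=ubuntu
        #   autologin-user-timeout=0
        sudo service lightdm restart    # or simply reboot the instance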

  • Automated testing tool development challenges (for embedded software)

    - by Karthi prime
    My boss wants me to come up with a proposal for the following tool:

    1. An IDE: able to build, compile and debug, with JTAG programming for the micro-controller.
    2. A test suite that reads the code in the IDE, auto-generates the test cases, and reports in-target unit-testing results (done by controlling code execution on the micro-controller via the IDE).
    3. A no-overhead code coverage tool which interacts with the test suite and the IDE.

    My job is to produce the high-level architecture of this tool, so as to proceed further. My current knowledge:

    - There are tool-chains available from the chip manufacturers which can be utilized along with an open-source IDE like Eclipse; together with an open-source burner, a complete IDE for a micro-controller can be put together.
    - Test cases can be auto-generated by reading the source file through parsing and scripting, based on keywords.
    - The test suite must be able to command the IDE to control execution through breakpoints and read the register contents from the micro-controller; this enables the in-target unit testing.
    - No-overhead code coverage should be done with no-overhead code instrumentation, so that it can execute in the resource-constrained environment of the micro-controller.

    I have the following questions:

    1. Any advice on the validity of my understanding?
    2. What challenges will I face during the development?
    3. What open-source tools would be helpful for this?
    4. What is the development time for this software?

    Thanks

  • Laptop connecting to Wifi but not to internet

    - by eddard stark
    My friend's laptop is able to connect to the Wi-Fi router; typing 192.168.1.1 in the browser shows the login page for the router. But he cannot connect to the internet. This is true on both Windows and Linux (dual-boot setup). There are 3 other laptops connecting to the internet via Wi-Fi just fine, and his was fine too until this happened all of a sudden.

    I tried doing a tracert from Windows to an external IP. The first hop to the modem is fine, but then the packets seem to be getting dropped. If his Wi-Fi adapter were damaged, how would it be connecting to the modem via Wi-Fi at all? I haven't asked a question here before, but this is really weird. If anyone needs any more information, I shall post it here.
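
    A short triage sequence that separates a routing problem from a DNS problem, run from the Linux side (192.168.1.1 is the router address mentioned above; the other targets are just common public test hosts):

        ip route                  # is there a default route via 192.168.1.1?
        ping -c 3 192.168.1.1     # router reachable on the LAN
        ping -c 3 8.8.8.8         # internet reachable by raw IP; failure here points at the router/ISP side
        ping -c 3 google.com      # name resolution; if only this fails, it is a DNS problem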

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier.

    1. File name

    While it is allowed to use spaces in your filename (and it may even seem logical to do so), don't use them if your file will end up on (or is born on) SharePoint. When you use the "Download a Copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see this as a different file (because the filename is different). I recommend naming files with a capitalization-style naming guideline. For instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx".

    Use the TITLE field in the Office applications to give your document a title (and subtitle, keywords, etc.). The Title column can be used in a view in a library. You can get to the document properties by clicking on Office Button/Prepare/Properties (Office 2007). This is metadata that is stored with the document and will remain in the document (even if you exchange the document via e-mail or via an external hard drive).

    The filename cannot be longer than 128 characters (and that is, IMHO, far beyond reasonable). You cannot use any of these characters: " # % & * : < > ? \ / { | } ~

    2. Versioning

    SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions, and for each of these two types you can configure the number of versions that are kept. Watch out: each version is saved in full, not only the delta between two versions, and this counts toward your Site Collection quota. (Example: you have a Word document with a size of 2 MB; keeping 5 drafts will result in storing, and consuming, 10 MB.)

    So don't call your document "NewUserAccountProcessDRAFTv1.docx"; call it "NewUserAccountProcess.docx" and use the versioning settings in your library. You can enable views on your library to display the version number, and you can have the version number displayed inside a Word document.

    3. Use metadata

    Use metadata to assign other properties to documents, so they can be easily identified, sorted or grouped by.

  • SQL server 2000 reporting bad values to ASP.Net Application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values; however, identical queries from my application report the wrong values, but only for a couple of dates.

    I have been over the code, and it is solid. If the error were in the code, all of the dates reported should be wrong. I have also run the code against an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault.

    My question is: has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.

  • Why would video stutter on HDMI but not on DVI?

    - by CorvT
    I've got a system running Ubuntu 12.04 with an i3 2120T CPU/GPU. When I play video through mplayer, I notice that when I'm hooked up to a screen via HDMI there is a small stutter (1-2 frames) every few seconds. I don't see this happening when I connect via DVI on the same screen. Resolution and refresh rate are the same for both HDMI and DVI, so I'm not sure where else the problem could be coming from.

    I've also tried two different screens, and different cables. I see the stutter with either HDMI-HDMI cables, or a DVI-HDMI cable with DVI from the PC and HDMI into the screen. I don't see the stutter with DVI-DVI cables, or when I use an HDMI-DVI cable with HDMI from the PC and DVI into the screen.

    I've also tried using an AMD 5XXX series card with the open source radeon driver, and saw the same problem. I then tried an nVidia GeForce 210 card with the closed source driver, and the stutter went away. To me this smells like a driver/mesa/glx issue (since the problem went away with the nvidia card/driver), but I have no idea how to track this down.

  • Lost WiFi after 12.10 upgrade

    - by Steven Guillory
    I received my new Dell Vostro 2420 last week, and just got around to upgrading from 11.10 to 12.10. Unfortunately, like many others (after researching the issue), I no longer have WiFi. I have tried every sudo command given that worked for others, and still can't get my wireless to function. I am new to Linux, so any and all help is appreciated. Thanks in advance!

    Edit: I can connect via ethernet, just not via wifi. As a matter of fact, when I use Fn + F2 to turn on wifi, only my bluetooth comes on.

    lspci:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
        00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
        09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

    This is what I am getting:

        dpkg: error: --install needs at least one package archive file argument

        Type dpkg --help for help about installing and deinstalling packages [*];
        Use dselect or aptitude for user-friendly package management;
        Type dpkg -Dhelp for a list of dpkg debug flag values;
        Type dpkg --force-help for a list of forcing options;
        Type dpkg-deb --help for help about manipulating *.deb files;

        Options marked [*] produce a lot of output - pipe it through less or more !

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

    - Backbone.js frontend
    - Rails 3.2
    - PostgreSQL
    - Resque + S3 for storage

    The flow of the app is as follows:

    1) Request from the frontend. Upload a video.
    2) Store the video.
    3) Query external APIs.
    4) Process / encode the video.
    5) Post back to the frontend.

    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there could be some fundamental mistakes. Also, I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

    A) Frontend machine. Just the frontend; talks to the backend via a REST API of sorts.
    B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3).
    C) S3 storage.
    D) Server for querying APIs. Basically just Resque workers that post info back to 2).
    E) Server for video encoding. Processes videos uploaded to 3) and uploads them back.

    So I will have:

        A) frontend
              \
               B) MAIN_APP/DB ----- C) S3 Storage (Files)
              /                \
        D) ExternalAPI_queries   E) Video_Processing
           (redundant DB)           (redundant DB)

    All of this will supposedly talk to each other via HTTP requests. My reasoning is that the video-processing part is by far the most resource-intensive, and I would just run a barebones application there that accepts requests and starts processing them.

    Questions:

    1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and also store duplicates of the database, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (and if so, how)?
    2) Is it a good idea to separate the API queries from the video-processing part? Logically they are very close (processing is determined by the result of the API queries), but resource-wise video processing is way more intensive.
    3) What should I use to distribute calls between the backend apps based on load?

  • Seeking a better solution for restricting access in the GRUB2 menu

    - by LiveWireBT
    I just read that in certain situations you should also protect access to your GRUB2 menu by setting a password, and maybe refine access by adding --unrestricted or --users as arguments to menuentries and submenus. I read the corresponding pages in the Ubuntu Community Documentation and the Arch Wiki.

    So, I created /etc/grub.d/01_security, stored usernames and passwords in there, made the file executable and ran update-grub. This is working as intended; every action in the menu prompts for a username and password. But I also want to modify the automatically generated entries to either restrict them to certain users (via --users) or make them available to everyone but not editable by everyone (via --unrestricted). I was able to find the proper lines in 10_linux and edit them accordingly; however, I'd love to see an easier solution. Perhaps an option like GRUB_DISABLE_RECOVERY="true" or GRUB_DISABLE_OS_PROBER=true in /etc/default/grub for easy (re)configuration (for linux and os-prober generated entries).

    Here's a diff from my 13.10 installation:

        $ diff /etc/grub.d/10_linux /etc/grub.d/10_linux_bak
        123c123
        < echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} --unrestriced \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^$
        ---
        > echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_inde$
        125c125
        < echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} --unrestricted \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_$
        ---
        > echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
        323c323
        < echo "submenu --unrestricted '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_$
        ---
        > echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"

    tl;dr: I'd love to see a simple solution for GRUB2 entries that cannot be modified without a password or are limited to certain users. (Yes, GRUB_DISABLE_RECOVERY="true" is active.)
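
    A slightly less invasive variant that some guides suggest (still an edit to /etc/grub.d/10_linux, so it gets overwritten when the grub package is upgraded; the exact CLASS line below is an assumption based on the stock 13.10 script): append --unrestricted once to the CLASS variable near the top of the script instead of patching every echo line, since ${CLASS} is interpolated into each generated menuentry.

        # in /etc/grub.d/10_linux, change the CLASS definition near the top, e.g.:
        #   CLASS="--class gnu-linux --class gnu --class os --unrestricted"
        # then regenerate the menu:
        sudo update-grub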

  • webmail suite recommendation

    - by hoball
    Hello, I have several email addresses in a few domains (email@domain1, email@domain2, email@domain3). Currently they are hosted on our own email server, and I am collecting the mail via the IMAP protocol (I would not like to use POP) in Thunderbird. I have a few partners and I want to allow them to access the same email addresses. Here is what I'd like:

    - All users can open all the inboxes via IMAP in Thunderbird (with proper configuration).
    - At the same time, there is a webmail system; every user can log in to their own account (userA, userB, userC) and will see all the inboxes (email@domain1, email@domain2, email@domain3).

    Would you recommend any suite that fits my needs? Either a system to be installed on my server or a remote service where I need to configure MX records will do. Thank you.

  • Import Java Trusted Certificate to JRE

    - by Zalastax
    I need to install a certificate from a Java app for a lot of people. I want to use a one-click program or batch file to import it as a Trusted Certificate (in Control Panel > Security > Certificates). Then they won't need to press "Always allow" the first time they use the application. I have extracted the needed certificate as both a .csr and a .cer (the .csr via the Control Panel and the .cer via keytool). Now I need to import one of them back without any clicking in menus. I don't really understand the documentation on importing a .cer with keytool and would like an example. Or is there an easier way than using keytool?
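
    A minimal keytool sketch for a scripted import into the JRE-wide cacerts trust store; the alias and file name are placeholders, changeit is the default cacerts password, and the exact path varies between JRE and JDK layouts. Note that the per-user "Trusted Certificates" list shown in the Java Control Panel is a separate deployment store, so importing into cacerts is an assumption about which store you actually need:

        # run with the keytool of the JRE that the users actually launch the app with
        keytool -importcert -noprompt -trustcacerts \
            -alias myappcert -file myappcert.cer \
            -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit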

  • Are .NET versions backwards compatible?

    - by Boden
    Over the years, various versions of .NET have been deployed to my client machines via WSUS. Now it seems that on many machines these installations have hosed each other, and certain .NET security updates are failing. I verified that I can run the .NET cleanup tool to get rid of all the .NET installations on a client, and I can then push out .NET 3.5 via WSUS. This seems to have solved the problems on the machine I tried it on. So the question is: if I've got .NET 3.5, is there any reason to also have previous versions installed?

  • Managing products on an ecommerce site [closed]

    - by John
    I've had a site that sells widgets for many years. I do not inventory my widgets, but the cost of adding them to the site and making sure the site is current is becoming prohibitive. Here are the facts:

    - I sell a single class of widget.
    - I have about 50,000 widgets on my site.
    - I have about 100 vendors that create and drop-ship the products when they get an order from me via email.
    - Each vendor carries from 50 to 5,000 types of widgets.
    - Vendors all have websites with images and descriptions of their products.
    - Each widget is produced in limited supply and usually sells out in 1-5 years.
    - Prices of a widget often go up, sometimes by more than 50%, before it sells out.
    - My vendors aren't very tech-sophisticated. They have websites with their products, but most can't supply an API or database dump. Their websites usually display retail prices to the public, but I log in or refer to a price list (usually Excel) for wholesale prices.

    As it stands now, I hire local people to add and describe each widget on our website. It usually takes a person 4 minutes to add a widget to the site, and that doesn't include moving to a new vendor. I feel like the upload/edit process is as good as it can get via a form/website. The problem is that it is getting very expensive to upload and keep the widget inventory current, and I often get orders for something after it has sold out from the vendor, or the price is wrong.

    This seems like it would be a problem in many industries. Can anyone suggest the cheapest way to upload inventory and ensure prices are current from my vendors? I'm assuming it will involve outsourcing, but I would like ideas on how to set up the compensation model.
