Search Results

Search found 57907 results on 2317 pages for 'database access'.

  • Access VirtualBox-ed server from behind the router

    - by migajek
    I have the following configuration: Windows 7 hosting VirtualBox and its guest, Ubuntu. The physical machine running VirtualBox is behind the router and has the address 192.168.0.110. VirtualBox uses bridged networking, and the IP of the VirtualBox-ed Ubuntu guest (eth0) is 192.168.0.200. The host (Win7) runs an HTTP service on port 80, while the guest (Ubuntu) runs its service on port 9000. I can access both services from inside the network by typing ip_address:port, and this works fine. Both ports are forwarded on the router to their respective IPs: 80 -> 192.168.0.110:80 and 9000 -> 192.168.0.200:9000. Unfortunately, accessing the router's external IP doesn't work as expected: while external_ip:80 works correctly, external_ip:9000 doesn't. I believe the problem is VirtualBox-related, since the same network also has another physical machine running Ubuntu with an HTTP service on port 8000, and that one is forwarded correctly.
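    One way to narrow this down is to confirm, on the Ubuntu guest itself, that the port-9000 service listens on the bridged interface rather than only on localhost. This is just a diagnostic sketch; the addresses and port are taken from the question.

        # On the Ubuntu guest: what address is the port-9000 service bound to?
        sudo netstat -tlnp | grep 9000
        # "0.0.0.0:9000" (or the guest's LAN IP) is reachable from outside;
        # "127.0.0.1:9000" is not, and would explain why the router forward fails.
        # From another LAN machine, verify the guest answers before blaming the router:
        curl -I http://192.168.0.200:9000/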

  • Block IPs if they access a resource

    - by Victor Oliva
    I own a server that is constantly being attacked by scripts (that try to access phpMyAdmin's setup files and the like). I've heard that many people get these kinds of attacks, but I'm starting to worry since they are getting more common (last month I got 2 attacks, and it's only November 7th and there have already been 3 attempts this month, on the 1st, 4th and 6th). I'm not really concerned about the attacks themselves, since I don't have any database (all the info I have on that server is absolutely public), but I'm worried about that increase in the attack rate. So I thought I could -temporarily- block the IPs those attackers come from, or do something that would make my server ignore requests that ask for phpMyAdmin, pma, xamp, etc. Is there something like that? My server runs Linux + Apache + PHP.
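    One low-effort option, sketched below, is to drop traffic from an offending address at the firewall; the IP shown is a placeholder. Tools like fail2ban, or an Apache deny rule scoped to the phpMyAdmin paths, achieve something similar automatically.

        # Drop all traffic from a known attacker address (placeholder IP)
        iptables -A INPUT -s 203.0.113.45 -j DROP
        # List the INPUT rules with packet counters to confirm the block is being hit
        iptables -L INPUT -n -v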

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought of it as a kind of rule that primary keys should be immutable, and that if there is a chance that some day a primary key would be updated, you should use a surrogate key instead. However, it is not in the SQL standard, and some RDBMSs' "cascade update" feature allows a primary key to change. So my question is: is it still a bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?

  • Setting a wireless access point on Ubuntu server 11.10

    - by Solignis
    I am trying to set up a WiFi access point with my Ubuntu server. I have managed to get my phone to connect to the wireless network and it now gets a DHCP lease, but it still cannot ping out or get pinged by anything on my network. I am pretty sure my problem is iptables, but I am not sure what is wrong. Here is what my rules look like (the ones pertaining to the bridge interface):
    # Allow traffic to / from wireless bridge interface
    iptables -A INPUT -i br0 -j ACCEPT
    iptables -A OUTPUT -o br0 -j ACCEPT
    I am guessing my rules are a little lean. The bridge exists on the same subnet as everything else on my network; I am using a 10.0.0.0/24 subnet. EDIT: I should also mention that when I do a ping test, the error I get is "Destination Host Unreachable".
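    If traffic from the wireless clients has to cross the bridge to reach the rest of the LAN, the FORWARD chain also has to allow it; INPUT and OUTPUT only cover traffic to and from the server itself. The rules below are a minimal sketch assuming br0 is the bridge named above and the default FORWARD policy is restrictive.

        # Allow traffic to be forwarded across the wireless bridge
        iptables -A FORWARD -i br0 -j ACCEPT
        iptables -A FORWARD -o br0 -j ACCEPT
        # Watch the per-rule packet counters to see which chain is dropping packets
        iptables -L -n -v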

  • VMware Infrastructure Web Access 2.0.0 stuck at "Loading"

    - by Gruber
    We have an Ubuntu 11 server running VMware virtual machines, which we manage using VMware Infrastructure Web Access 2.0.0. My colleague is able to use it successfully with Internet Explorer 9. However, when I try to connect I am stuck with an empty login page whose title says "Loading". It happens in all browsers (IE9, Firefox, Chrome, Opera). My colleague also gets stuck at "Loading" if he tries another browser. How can I resolve this problem?

  • Choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I have a disk array appliance of 8 disks, 1 TB each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and we do lots of searches/scans; with the old disks we have now, most slowdowns happen on SELECTs. Fault tolerance is less important: surviving 1 or 2 disk failures is enough. Space is the least important factor; even 1 TB will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but other options might be even better.
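    Whatever level is chosen, it is worth benchmarking the array with a read-heavy workload before committing data to it. The sketch below uses pgbench's built-in select-only mode and assumes a scratch database named testdb (a made-up name); the scale factor controls how much data is generated.

        # Initialize a scratch database (data size grows with the scale factor)
        pgbench -i -s 100 testdb
        # Run a select-only (read) workload: 8 clients, 60 seconds
        pgbench -S -c 8 -T 60 testdb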

  • Speaking - 24 Hours of PASS, Summit Preview Edition

    - by AllenMWhite
    There's so much to learn to be effective with SQL Server, and you have an opportunity to immerse yourselves in 24 hours of free technical training this week from PASS, via the 24 Hours of PASS event. I'll be presenting an introductory session on PowerShell called PowerShell 101 for the SQL Server DBA. Here's the abstract: The more you have to manage, the more likely you'll want to automate your processes. PowerShell is the scripting language that will make you truly effective at managing lots of...(read more)

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. The software is used to map and register every kind of animal on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag indicating whether it is at risk of extinction, and a dangerousness scale (this is a fake software and specification, and I don't want to discuss that here). There are already 100,000 animal records saved in the DB.
    New feature: one year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. (Or it could be, say, where the animal was discovered.)
    Problem: I already have 100,000 recorded animals without a class or a discovery location, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there is no default animal class or discovery location). I don't want to keep the requirement rule only in my software; my DB must enforce this requirement too (I like to keep business rules in the DB as well). What are the alternatives for solving this situation? I am in a situation where this new value cannot be filled in retroactively for the existing records: the time has already passed and I can't go back in time to collect it.
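    A common way out, sketched below, is a three-step migration: add the column as nullable, backfill the existing rows with an explicit placeholder value such as 'UNCLASSIFIED', then add the NOT NULL constraint so the database enforces the rule from that point on. This is a hypothetical MySQL example; the database, table and column names are made up for illustration and the question does not name a DBMS.

        # Hypothetical example; adjust names and types to the real schema
        mysql -u admin -p animals_db -e "ALTER TABLE animals ADD COLUMN class VARCHAR(100) NULL;"
        mysql -u admin -p animals_db -e "UPDATE animals SET class = 'UNCLASSIFIED' WHERE class IS NULL;"
        mysql -u admin -p animals_db -e "ALTER TABLE animals MODIFY class VARCHAR(100) NOT NULL;"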

  • Restrict access to SSH for one specific user

    - by j0nes
    I am looking for a way to secure my servers with the following setup: I have a server where I can log in via SSH. The main account there (named "foo") is secured by a key-based login with a password. There is another user account (named "bar") that I use to log in via cronjobs running on other servers; this one also has a key-based login, but without a password. Now I want to limit access to this machine for the "bar" account: it should only be accessible from known IPs. However, the "foo" account should not be affected by this; it should remain accessible from any IP. How can I manage this? Or is there a simpler solution to the whole thing?
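    OpenSSH can do this without touching the firewall: either restrict bar's key in authorized_keys with a from= option, or use per-user AllowUsers patterns in sshd_config. The sketch below uses placeholder addresses; adjust paths and the service name to your distribution.

        # Option 1: limit bar's key to known source addresses by prefixing the key line
        # in /home/bar/.ssh/authorized_keys (placeholder IPs):
        #   from="198.51.100.10,198.51.100.11" ssh-rsa AAAA... bar@cronhost
        #
        # Option 2: in /etc/ssh/sshd_config, allow foo from anywhere but bar only from
        # a known host (note: AllowUsers denies any user not listed), then reload sshd:
        echo 'AllowUsers foo bar@198.51.100.10' | sudo tee -a /etc/ssh/sshd_config
        sudo service ssh reload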

  • Server refusing access from every host except itself

    - by mezamashiman
    I have media content on a hosted server that I want to be accessed by another domain. Even if I "Allow from all" in the configuration file, every host except the server itself fetches the hosting company's generic landing page, which puzzles me. I test it with curl, with the command: curl -H "Host: anything.com" http://mydomain.com and it just shows the hosting company's page. If I do: curl -H "Host: mydomain.com" http://mydomain.com it shows my content. How do I allow other hosts to access my content? I thought it would work with "Allow" in .htaccess, but it doesn't.
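    This looks less like an Allow/Deny problem and more like name-based virtual hosting: requests whose Host header doesn't match a configured vhost fall through to the host's default (catch-all) site. Assuming Apache, a quick check is to list the vhosts the server actually knows about:

        # Show the parsed virtual hosts and which one is the default for port 80
        # (the command is apache2ctl -S on Debian/Ubuntu builds)
        apachectl -S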

  • GParted tells me my partition has 1.30 GiB used space but I cannot access its contents

    - by reprogrammer
    I have an ext4 partition (/dev/sda7) for my Linux install and another (/dev/sda5) for keeping my data. When installing Ubuntu 10.04 LTS, I set the mount point of /dev/sda7 to "/" and that of /dev/sda5 to "/data". GParted tells me that 1.30 GiB out of 70.12 GiB of /dev/sda5 has been used up, but the mounted directory "/data" is empty. So it looks like my data is there but I cannot access it. Besides, when I set the mount points, I didn't check the "format" box, so the partition shouldn't have been formatted. How can I check whether it has been formatted? How can I recover my files?
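    Before assuming data loss, it is worth checking whether /dev/sda5 is actually mounted at /data at all; an empty /data often just means the mount never happened and you are looking at the empty directory underneath the mount point. A diagnostic sketch:

        # Is /dev/sda5 currently mounted anywhere?
        mount | grep sda5
        # What filesystem (and label/UUID) does the partition carry?
        sudo blkid /dev/sda5
        # Try mounting it by hand and look inside
        sudo mount /dev/sda5 /data && ls /data
        # If that works, add a matching entry to /etc/fstab so it mounts at boot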

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.
    The network is set up like this:
    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, which used to run the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.
    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:
    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (and not just laptop users either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?
    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?
    Things I've thought about trying:
    For problem 1:
    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share data to the equivalent share on NEWBOX and reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?
    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, grab the unsynced data, and reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?
    c) Manually visit each workstation, copy the contents of the CSC folder, manually copy that data onto NEWBOX, and reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?
    For problem 2:
    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself: in AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.
    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

  • How do I access my public DNS on Amazon's EC2

    - by Spencer Carnage
    I'm working with Amazon's EC2 for the first time. I went through all of the steps at http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/ and now I'm trying to access the public DNS through a web browser. I get nothing. I don't even know if I'm supposed to get anything, really. I just want to see something that indicates that my instance is accessible from the web for development's sake. I'm completely new at this and I can't find a simple answer to this anywhere to save my life. Any help would be MUCH appreciated. Thanks!
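    Two things generally have to be true before the public DNS name shows anything: the instance's security group must allow inbound traffic on port 80, and something on the instance must actually be listening there. A rough check, with a placeholder hostname standing in for the real public DNS:

        # On the instance: is any web server listening on port 80?
        sudo netstat -tlnp | grep ':80'
        # From your own machine: does the public DNS answer at all?
        curl -I http://ec2-203-0-113-10.compute-1.amazonaws.com/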

  • Lingering database-connections from Feng Office

    - by Bobby
    I've installed Feng Office on our main server and it is working perfectly so far. Unfortunately there seems to be a problem with the connection to the MySQL database. While the connection itself works fine, it's the reuse/pooling of connections that seems to be bugged: there are lingering/sleeping connections to the server from Feng Office which won't close and don't get reused after some time (120 seconds). Of course those lingering processes/connections pile up pretty fast. I've found a thread on the forums about this behavior, but the suggested fix is already applied (by default). I'm sure this is just a configuration issue, but I'm a little clueless because, besides a MediaWiki, a DokuWiki and home-brewed PHP applications, Feng is the only application with this issue. The setup is a Microsoft Windows 2003 Server with MySQL 5.0.26 and Apache 2.2. Where can I start looking for clues about why this is happening, and how do I get rid of the lingering MySQL connections?
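    It can help to see what MySQL itself thinks those connections are doing and what the idle timeout is, since the server closes sleeping connections once wait_timeout is exceeded. A diagnostic sketch (written as shell-style commands; the mysql client invocation is the same on Windows):

        # Show who is connected and how long each connection has been sleeping
        mysql -u root -p -e "SHOW FULL PROCESSLIST;"
        # Check the idle-connection timeout (in seconds); sleeping connections
        # older than this should be closed by the server
        mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout';"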

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it takes all available memory; it does not recover by itself and I have to restart the server. There is nothing suspicious in the Tomcat access logs. I guess one of our users could have submitted a very "heavy" request which brought the server down. Is it possible that this request is not in the Tomcat logs because it never finished?
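    Tomcat's access log valve normally writes an entry only when a request completes, so a request that spins forever typically never shows up there. When the CPU pegs again, a thread dump of the Tomcat JVM will show which request and code path is stuck; a sketch, assuming the JDK tools are installed and <tomcat_pid> is the process id you find:

        # Find the Tomcat process id
        ps aux | grep [j]ava
        # Take a thread dump (repeat a few times, a few seconds apart)
        jstack -l <tomcat_pid> > /tmp/tomcat-threads-$(date +%s).txt
        # A quick look at what is filling the heap
        jmap -histo <tomcat_pid> | head -n 40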

  • Master Data Management

    - by Logicalj
    I am looking for a very flexible, dynamic, easy-to-integrate application with as many features as possible for Master Data Management. Since Master Data Management is used to manage operational data, analytical data and master data, I want guidance on what exactly is expected from Master Data Management and what the basic and challenging scenarios are that it needs to cover or resolve. Please guide me on all the possible aspects of Master Data Management, such as data cleansing, data management, data analysis, etc.

  • uTorrent error: Access is denied

    - by Ben Franchuk
    When I booted my computer today and opened uTorrent, it went on, as usual, to check the torrents that I had been running before. On one of them it returned an "Access is denied" error after it had been checked. I have absolutely no idea why this is happening, as I was downloading the same torrent just the night before, and I also have other torrents going in the same downloads folder and they do not show any such errors. It may be worth noting that on said defective torrent there appear to be no seeds and no connected peers (of the 33 available), and three of the trackers appear to be inactive as well (those being Peer Exchange, Local Peer Discovery and DHT). I have rebooted a few times in an attempt to fix this problem, but with no success. I also tried force-rechecking and force-starting a few times, but neither of those worked either. EDIT: I booted today and it was working, for whatever reason. Thanks for all of the help anyway, guys!

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: refined the question to focus on the core issue.
    Context: I have historical data about property (house) sales collected from various sources into a centralized/cloud data source (assume information collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source.
    Example queries: Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Postcode" (worked out from average values of historical data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes such as building type, year built and features)? In addition to the average price, the application should support other property metrics such as maximum or minimum price, and trends (graphs) of a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be: what is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)?
    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, data warehousing, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further.
    My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues. DB design: as I understand it, an RDBMS works well if you know the data model at design time, but I expect the attributes of a property or other entities (such as users) to evolve quickly, so maintenance would be an issue, and with multiple users executing queries at the same time, performance would be a bottleneck. Other options such as graph DBs (http://www.tinkerpop.com/) seem a bit complex; they are good, but using such general-purpose tools for my problem feels like solving it in assembly. Big Data solutions are aimed at analysing data from multiple unrelated domains.
    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for a property listing or similar portal.)

  • Active Directory replication failing with Access is Denied

    - by Justin Love
    I recently discovered that Active Directory replication started failing about a month ago. If I attempt to Replicate Now from the failing domain controller, I receive: "The following error occurred during the attempt to synchronize the domain controllers: Access is denied." It is between two servers at a remote site: one is Windows 2003 and the other is Windows 2000, and the Windows 2000 machine is the one experiencing the errors. The domain uses the older OUR_DOMAIN naming style. Attempts so far:
    - Disabled the Kerberos service on the Windows 2000 server and restarted it.
    - RPC and RPC Locator services have the expected settings.
    - HKEY_Local_Machine\Software\Microsoft\Rpc\ClientProtocols was missing ncacn_nb_tcp on the Windows 2003 server (added).
    - Portqry reports okay.
    - Firewall disabled.
    - netdom resetpwd (and reboot) on the Windows 2000 server.

  • SharePoint Server 2007 and HTML Forms - How to control access rights

    - by Anarkie
    I'm working with hosted SharePoint 2007 with Forms Server. I need to allow clients to submit HTML forms designed in InfoPath. The problem is, I need to make sure the clients don't see the library, because there is sensitive data on these forms. I also need a repeated library that is based on the internal admin records and requirements. Apart from making a separate library per customer, does anyone have any suggestions? My goal: 1) customers enter their requests through a link or a provided page; 2) internally we address the requests and perform the required arrangements, adding billing and payment fields; 3) SharePoint metrics, reports, etc. are based on the provided information and status. Thanks in advance!

  • Wamp virtual host with support for remote access

    - by Farid
    To cut a long story short, I've set up a WAMP server with a local virtual host for a domain like sample.dev. I've now bound my static IP and port 80 to Apache and asked the client to make a change in his hosts file, adding x.x.x.x sample.dev. I've also configured my httpd virtual host like this:
    <VirtualHost *:80>
        ServerAlias sample.dev
        DocumentRoot 'webroot_directory'
    </VirtualHost>
    The client can reach my web server directly by IP address, but when he tries using the sample.dev domain it looks like he gets into some infinite loop. The firewall is off too. What could the problem be? Thanks.
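    Since direct IP access works, the port-80 forward itself is fine, so the loop is most likely either name resolution on the client or vhost matching on the server (it may also be worth adding a ServerName sample.dev directive alongside the ServerAlias). Two quick checks from the client side, keeping the question's x.x.x.x placeholder for the static IP:

        # Does sample.dev resolve to the static IP on the client machine?
        ping -c 1 sample.dev
        # Ask Apache directly for that vhost, bypassing DNS and hosts entries entirely
        curl -v -H "Host: sample.dev" http://x.x.x.x/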

  • How to access a site in IIS with no DNS mapping

    - by CiccioMiami
    In IIS 7.5, hosted on Windows Server 2008 R2, I have several websites with no DNS name assigned. Let's take, for instance, the site with the site name mySite (as named in IIS). For this site I have the standard binding with no host name. Suppose that my server's IP address is, for instance, 101.22.23.01. It therefore seems logical to me that in order to access the website I should put [IP_address]/[sitename] in the address bar of my browser, in this case 101.22.23.01/mySite, but it does not work. Shall I specify something else in the bindings?

  • Windows remote shutdown: access denied

    - by gregseth
    I have 3 "client" computers, on which the mentioned user is administrator: CPU1: Win Vista 32-bit -- User: Domain\User1 -- IP: 192.168.42.1 CPU2: Win 7 64-bit -- User: localhost\User2 -- IP: 192.168.42.2 CPU3: Win 7 64-bit -- User: Domain\User3 -- IP: 192.168.42.3 And a "target" computer (the one that I want to shutdown from the three others): TGT: Win 7 64-bit -- User: localhost\User4 -- IP: 192.168.42.21 I'm trying to shutdown TGT with the following command: shutdown /s /m \\192.168.42.21 It's working from CPU1 (meaning TGT shuts down), but from CPU2 and CPU3 I get the following message: Access denied. (5) What am I to understand? What should I do to get it working form all of my computers.

  • how to manage credentials/access to multiple ssh servers

    - by geoaxis
    I would like to make a script which can maintain multiple servers via SSH. I want to control authentication/authorization in such a manner that authentication is done by a gateway, and any other access is routed through this SSH server to internal services without any further authentication/authorization requirements. So if user A can log into server_1, for example, he can then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shutting down MySQL, upgrading it and restarting it; this could be done via some remote shell script). The problem I am trying to solve is to come up with a deployment script for a Java EE system which involves databases and Tomcat instances that need to be shut down and re-spawned. The requirement is to have a deployment script with as little human interaction as possible, for both developers and operations.
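    OpenSSH covers most of this with key-based logins plus a jump host, so the deployment script only ever authenticates against the gateway. A sketch with made-up host, user and script names (the -J option needs a reasonably recent OpenSSH; older versions can use ProxyCommand with ssh -W instead):

        # One-off: run a remote maintenance script on server_2, hopping through the gateway
        ssh -J deploy@gateway.example.com deploy@server_2 'sudo /opt/deploy/restart-mysql.sh'
        # Or make the hop implicit in ~/.ssh/config so plain "ssh server_2" works:
        #   Host server_*
        #       ProxyJump deploy@gateway.example.com
        #       User deploy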

  • HP Media Smart remote access

    - by Coov
    I just purchased this box for home backups of my PCs and Macs. Everything works great except for the remote access part. I can RD into the machine locally, but I can't get to it from outside of my network. I've enabled port forwarding on my router but it doesn't seem to matter. I checked with Qwest and they don't block these ports, so I'm at a loss. I do have Vonage in front of my router, but I've taken it out and it didn't make a difference. I suspect I've made an error with my router setup. I'm a programmer and I'm playing in the world of the unknown here. I'm lost. Any suggestions?
