Search Results

Search found 6568 results on 263 pages for 'shared'.


  • Unable to find reference to std library math function inside library

    - by Alex Marshall
    Hello, I've got several programs that use shared libraries, and those shared libraries in turn use various standard C libraries. For example, Program A and Program B both use Shared Library S, and S uses the standard C math library. I want to be able to statically link Shared Library S against the standard library, and then statically link Programs A and B against S, so that I don't have to drag the library files around; these programs are going to be running on an embedded system running BusyBox 0.61. However, when I try to statically link the programs against Shared Library S, I get an error message from GCC stating:

        ../lib/libgainscalecalc.a(gainscalecalc.): In function 'float2gs':
        [path to my C file].c:73: undefined reference to 'log'

    Can somebody please help me out? The make commands I'm using are below:

        CFLAGS += -Wall -g -W
        INCFLAGS = -I$(CROSS_INCLUDE)/usr/include
        LIBFLAGS += -L$(CROSS_LIB)/usr/lib -lm

        gainscalecalc_static.o: gainscalecalc.c
                $(CC) $(CFLAGS) -c $< -I. $(INCFLAGS) -o $@

        gainscalecalc_dynamic.o: gainscalecalc.c
                $(CC) $(CFLAGS) -fPIC -c $< -o $@

        all: staticlib dynamiclib static_driver dynamic_driver

        clean:
                $(RM) *.o *.a *.so *~ driver core $(OBJDIR)

        static_driver: driver.c staticlib
                $(CC) $(CFLAGS) -static driver.c $(INCFLAGS) $(LIBFLAGS) -I. -L. -lgainscalecalc -o $@

        dynamic_driver: driver.c dynamiclib
                $(CC) $(CFLAGS) driver.c -o $@ -L. -lgainscalecalc

        staticlib: gainscalecalc_static.o
                $(AR) $(ARFLAGS) libgainscalecalc.a gainscalecalc_static.o
                $(RANLIB) libgainscalecalc.a
                chmod 777 libgainscalecalc.a

        dynamiclib: gainscalecalc_dynamic.o
                $(CC) -shared -o libgainscalecalc.so gainscalecalc_dynamic.o
                chmod 777 libgainscalecalc.so

    Edit: Linking against the shared libraries compiles fine, I just haven't tested them out yet.
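    A likely explanation, offered as a hedged note rather than something from the post: GNU ld resolves static archives left to right, and in the static_driver rule $(LIBFLAGS) places -lm before -lgainscalecalc, so nothing references 'log' yet when libm is scanned. A minimal sketch of a reordered link line:

        static_driver: driver.c staticlib
                $(CC) $(CFLAGS) -static driver.c $(INCFLAGS) -I. -L. -L$(CROSS_LIB)/usr/lib -lgainscalecalc -lm -o $@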


  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all https requests sent to the server are redirected to my website: https://otherdomain.com leads to a certificate warning and then, if you continue, to my website. OK, my fault, I should have known SSL means one IP, otherwise this sort of thing can happen. But Google's search results for my website's target keywords are now displaying these other websites above my own, even though they have nothing even remotely related to the target keywords! Already done: provided a canonical URL in the HTML page; told the server manager about the problem, who says it's normal but will look for a solution (that was one week ago, no answer since). I have no idea why Google is indexing these https URLs: I thought someone would have to submit them, or link to them from an HTML page, for Google to index dummy https domains, and I see no reason why anyone would do that. Any suggestion on how to solve this situation? Go-live is in one week and SEO looks really bad because of this.
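    One mitigation worth sketching (hedged; assumes Apache with .htaccess available, and domain.com stands in for the real name): send a 301 to the canonical host whenever an https request arrives with some other Host header, so the stray URLs fall out of the index:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^(www\.)?domain\.com$ [NC]
        RewriteRule ^(.*)$ https://domain.com/$1 [R=301,L]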


  • How should I show shared resources during a Shared Resource game in the Galaxy Editor?

    - by Mag Roader
    One of my favorite ways to play the original StarCraft was in a "Team" game. In this game type, multiple players on the same "team" would share control, resources, supply, and even the same starting location. It was like playing as one player, only with two humans controlling it. It was a lot of fun. I want to do something very similar in StarCraft 2, but I need to create a custom map in the Galaxy Editor to do it. I found the editor can emulate this behavior quite easily: there is a trigger action "Set Alliance for Player Group" to "...treat each other as Ally With Shared Vision, Control, And Spending." To use this, I create units for only one of the players, and then set all players to be allied with each other in this way. All the other players get no units and no resources. This makes one player the actual owner of all the units, with everyone else tagging along with full control. This nearly works! The problem is that if I am not the actual owning player, I can't see how many minerals/gas/supply the team has, which makes it pretty difficult to build stuff. What would be the best way to display to the other players how many minerals/gas/supply the team has?


  • Setting up a shared media drive

    - by Sam Brightman
    I want a shared media drive to be transparently usable by all users, while also sticking to FHS and Ubuntu standards; the former takes priority if necessary. I currently mount it at /media/Stuff, but /media is supposed to be for external media, I believe. The main issue is setting permissions so that multiple users can be granted read and write access within the same directories. The InstallingANewHardDrive wiki page seems both slightly confused and not what I want. It claims that this sets ownership for the top-level directory (despite the recursion flag):

        sudo chown -R USERNAME:USERNAME /media/mynewdrive

    and that this will let multiple users create files and sub-directories but only delete their own:

        sudo chgrp plugdev /media/mynewdrive
        sudo chmod g+w /media/mynewdrive
        sudo chmod +t /media/mynewdrive

    However, the group-writable bit does not seem to be inherited, which is troublesome for keeping things organised (it prevents creation inside sub-folders originally made by another user). The sticky bit is probably also unwanted for the same reason, although currently it seems that userA (perhaps the owner of the mount point?) can delete userB's files, but not vice versa. This is fine, as long as userB can create files inside a directory of userA's. So: What is the correct mount point? Is plugdev the correct group? Most importantly, how do I set up permissions to maintain an organised media drive? I do not want to be running cron jobs to set permissions regularly!
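    A common pattern for this (a hedged sketch, not from the post; the group name and /srv path are illustrative): give the tree a dedicated group, set the setgid bit so new subdirectories inherit that group, and add a default ACL so group write is inherited as well:

        sudo groupadd media
        sudo usermod -aG media userA
        sudo usermod -aG media userB
        sudo chgrp -R media /srv/media
        sudo chmod -R 2775 /srv/media                  # leading 2 = setgid: new entries inherit the group
        sudo setfacl -R -m d:g:media:rwx /srv/media    # default ACL: group rwx on newly created entries

    With this in place, files created by one user remain group-writable by the others, with no cron job required.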


  • accessing files on a shared folder via IIS

    - by Darkcat Studios
    I'm not sure if this suits Stack Exchange, Server Fault, or here, so I'll go with here for a start. I'm having issues setting up a network share to be accessed by IIS; all I need to do is read/write files on the other server. We have two servers set up (both 2008 R2 and IIS 7.5): one is the WEB server, which is externally accessible and NOT part of the domain, and the other is an intranet server which has no Internet connectivity and is part of the domain. These two servers can talk to each other happily. I have the SQL server on the WEB server shared across to the intranet server so that the web content is editable from the intranet. I can share a folder on the web server (say, wwwroot/Images/) and connect to it from the intranet server, even have it as a mapped drive (but I know that's not going to work for IIS to access it), so there seems not to be a connectivity issue. I can also set up a virtual folder in IIS on the intranet server. This is where it gets annoying: I can't connect using pass-through authentication, because there is no suitable user on the web server (which is not on the domain). If I set up a user on the web server, e.g. Intranet_USR, and give it appropriate rights to the folder, files, and share, I can connect, but can only view folder contents in IIS, not read the files, although that user has read privileges!! Any help much appreciated!


  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question but a design consideration that I often run across in my day-to-day development work. Let's say you have a class that represents some kind of collection:

        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class

    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the modified customer orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:

        1. Add filtering calls to the ModifiedCustomerOrders class, so that you can say: MyOrdersClass.RemoveCanceledOrders()
        2. Create a Static/Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
        3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
        4. Create a "Service" method that handles the getting of orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
        5. Others?

    I tend to prefer the tooling/extension-method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into ModifiedCustomerOrders, having filtering as part of the class makes it a little bit more complicated to test. Typically, I choose extension methods where I am doing parameterless transformations/filters; as they get more complex, I move them into a static class instead. Thoughts on this approach? How would you approach it?
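    For concreteness, a minimal sketch of option 3 in VB.NET (the type names follow the question; the Canceled property is an assumption made for illustration):

        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module OrderFilterExtensions
            ''' Returns the orders that are not canceled; assumes ModifiedOrders
            ''' exposes a Boolean Canceled property (illustrative, not from the post).
            <Extension()>
            Public Function RemoveCanceledOrders(orders As List(Of ModifiedOrders)) As List(Of ModifiedOrders)
                Return orders.Where(Function(o) Not o.Canceled).ToList()
            End Function
        End Module

    Because the extension method is a pure function of its input, it can be unit-tested against a hand-built list without constructing a full ModifiedCustomerOrders instance.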


  • How To Sync Your Shared Google Calendars with Your iPhone

    - by Justin Garrison
    Smartphones are essential to our daily lives. They help us stay connected and keep us organized, but when it comes to calendar syncing and Gmail there are limitations. Here's how you can sync your shared calendars and contacts from Gmail. If you use Gmail, you probably know about the ability to create and share calendars with others. They help keep groups organized and even let you subscribe to public events. When it comes to getting that information on your smartphone, there are some trade-offs if you are on a non-Android phone. Android phones will sync your email, contacts, and all of your calendars just by signing into your Gmail account. If you have an iPhone, however, you will miss out on contact syncing if you set up your account as a Gmail account.


  • Google results show .info domain instead of .com

    - by user481913
    I am on shared hosting currently, and I registered this account with a .info domain as the main domain, say MyDomain.info. However, the site runs from MyDomain.com. This is a cPanel-based shared hosting account. MyDomain.info has nothing hosted at all, i.e. no content files; MyDomain.com is set up as an add-on domain and runs from /public_html/MyDomain under MyDomain.info. The problem is that when I type MyDomain as the keyword in a Google search, it shows result(s) for MyDomain.info, although this is not the intended site and has no content hosted on it. I tried to solve the issue by issuing a 301 permanent redirect from MyDomain.info to MyDomain.com, but Google keeps displaying MyDomain.info as the main site even one month after the redirect. I want Google to index MyDomain.com as the main site and remove MyDomain.info from the results. Also, is this harmful from the SEO point of view? How can I improve the SEO if it is?


  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:

        1. The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth.
        2. The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection; the data should still resolve properly later, when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.

    One idea I have been toying with is logically splitting the integer key into two parts: the high 4 or 5 bits could be used as some sort of device ID, while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device IDs that fit in a few bits. This should be viable, since I don't need to deal with millions of devices, just the few devices shared by a given iCloud account. I'm open to suggestions. Thanks.
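    A hedged sketch of the bit-split idea from the last paragraph, in C (the 5-bit width and function names are illustrative, not from the post):

        #include <stdint.h>

        #define DEVICE_BITS  5
        #define COUNTER_BITS (63 - DEVICE_BITS)   /* keep the sign bit clear */

        /* Pack a small per-device ID into the high bits and the local counter
           into the low bits, so keys minted on different devices never collide. */
        static int64_t make_key(int64_t device_id, int64_t counter) {
            return (device_id << COUNTER_BITS)
                 | (counter & ((INT64_C(1) << COUNTER_BITS) - 1));
        }

        static int64_t key_device(int64_t key)  { return key >> COUNTER_BITS; }
        static int64_t key_counter(int64_t key) { return key & ((INT64_C(1) << COUNTER_BITS) - 1); }

    Time-seeded keys generated so far will typically leave the high bits clear (an implicit device ID of 0), and each device then only needs a distinct small integer, which could be assigned once, for example via the iCloud key-value store on first launch.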


  • Debugging into a shared library source from consuming app, using QTCreator

    - by morpheous
    I am using Qt Creator (1.3.1) on Ubuntu Karmic. I have built two projects: a shared library, and an application that links against the shared library. I am debugging the application and need to step into the implementation (i.e. the source) of one of the functions exported by the shared library. Does anyone know how to set up Qt Creator to let me step into the source of a shared library?


  • How do I use waf to build a shared library?

    - by James Morris
    I want to build a shared library using waf, as it looks much easier and less cluttered than GNU autotools. I actually have several questions so far, related to the wscript I've started to write:

        VERSION = '0.0.1'
        APPNAME = 'libmylib'

        srcdir = '.'
        blddir = 'build'

        def set_options(opt):
            opt.tool_options('compiler_cc')
            pass

        def configure(conf):
            conf.check_tool('compiler_cc')
            conf.env.append_value('CCFLAGS', '-std=gnu99 -Wall -pedantic -ggdb')

        def build(bld):
            bld.new_task_gen(
                features = 'cc cshlib',
                source = '*.c',
                target = 'libmylib')

    The line containing source = '*.c' does not work. Must I specify each and every .c file instead of using a wildcard? Also, how can I enable a debug build, for example? (Currently the wscript always uses the debug-build CFLAGS, but I want to make this optional for the end user.) It is planned for the library sources to be within a subdirectory, with the programs that use the lib each in their own subdirectories.
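    On the wildcard question, a hedged sketch (this uses the newer waf API, where globbing is done with ant_glob; the 1.5-era API in the question names the tool compiler_cc and the flag variable CCFLAGS, so the exact spelling may differ on that version):

        def options(opt):
            opt.load('compiler_c')
            opt.add_option('--debug', action='store_true', default=False,
                           help='build with debugging flags')

        def configure(conf):
            conf.load('compiler_c')
            if conf.options.debug:
                conf.env.append_value('CFLAGS', ['-std=gnu99', '-Wall', '-pedantic', '-ggdb'])

        def build(bld):
            # ant_glob expands the pattern; a literal '*.c' string in source is not expanded
            bld.shlib(source=bld.path.ant_glob('*.c'), target='mylib')

    Note that target='mylib' is deliberate: waf prepends the platform's lib prefix itself, so 'libmylib' would yield liblibmylib.so.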


  • How do I use a shared library (in this case JsonCpp) in my C++ program on Linux?

    - by Not Joe Bloggs
    I'm a new-ish C++ programmer, and I'm doing my first program on my own using C++. I decided I would like to use JSON to store some of the data I'm going to be using, and I've found a library to handle JSON, JsonCpp. I've installed the library using my Linux system's package manager. In my source code file I've used #include <json>, and compiled with g++ using its -ljson and -L/usr/lib options (libjson.so is located in /usr/lib). However, the first usage of Json::Value, an object provided by the library, gives a compilation error of "Json has not been declared". I'm sure my mistake is something simple, so could someone explain what I'm doing wrong? None of the books I have mention how to use shared libraries, so I've had to google to find this much.
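    A hedged sketch of typical JsonCpp usage from that era (the header lives under a json/ directory rather than being a bare <json>; the exact include path and link flag vary by distribution, so treat both as assumptions to verify):

        #include <json/json.h>   // on some systems: <jsoncpp/json/json.h>
        #include <iostream>

        int main() {
            Json::Value root;            // the Json namespace comes from the header
            root["name"]  = "example";
            root["count"] = 3;

            Json::StyledWriter writer;   // classic JsonCpp writer API
            std::cout << writer.write(root);
            return 0;
        }

        // Build (illustrative): g++ main.cpp -ljson
        // (newer packages ship the library as -ljsoncpp)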


  • How to set a breakpoint on a function in a shared library that has not been loaded in gdb

    - by pierr
    Hi, I have a shared library, libtest.so, which will be loaded into the main program using dlopen. The function test() resides in libtest.so and will be called in the main program through dlsym. Is there any way I can set up a breakpoint on test? Please note that the main program is not linked against libtest.so at link time; otherwise I would be able to set the breakpoint, albeit as a pending one. In my case, when I do b test, gdb tells me: Function "test" not defined.
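    For reference, gdb can be told to keep the breakpoint pending until the library is loaded (standard gdb behaviour; the session shown is illustrative):

        (gdb) set breakpoint pending on
        (gdb) break test
        Function "test" not defined.
        Breakpoint 1 (test) pending.
        (gdb) run
        # the breakpoint resolves automatically when dlopen() pulls in libtest.so

    Without that setting, an interactive gdb will instead ask "Make breakpoint pending on future shared library load?" when the function is unknown.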


  • PHP: Using browscap.ini on shared host. - ini_set() failing

    - by GreybeardTheUnready
    I'm trying to use get_browser(), but unfortunately my page is on a shared host and I have no access to php.ini. I have downloaded the latest version of browscap.ini and placed it in my document root. I have then added the following:

        if (!ini_set('browscap', '/home/private stuff/browscap.ini')) {
            echo "Failed to set browscap";
        } else {
            echo "browscap = [" . ini_get('browscap') . "]";
        }
        exit();

    But this fails. (NB: the failed condition always fires, and ini_get() shows [] -- even if I didn't have the browscap.ini file, shouldn't the setting still show up in ini_get()?) I have looked at the previous questions on this and they don't seem to help. Any ideas?


  • How do shared hosting, domain names, and DNS work together?

    - by vtortola
    Hi, I have this little doubt, but I couldn't find information about it, probably because I'm not searching for the correct thing. When a browser asks for "www.mydomain.com", the DNS server returns an IP address, and the browser goes there... but what happens then? I mean, that IP address could be a shared host that contains hundreds of web pages and domains, so how does the server know where the request has to go? Is this something the web server does? Is it something I could implement in a web application? For example, I have a web application that contains accounts, and each account has a default web page. You can access that page by passing the account name, e.g. "www.mydomain.com/myaccount", but now I want to register "www.myaccount.com" and have it serve the "www.mydomain.com/myaccount" content. Is that possible? Kind regards.
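    What makes this work is the HTTP Host header: the browser sends the hostname it resolved along with every request, and the web server uses it to pick a site (name-based virtual hosting). A hedged sketch in Apache terms, with illustrative names:

        <VirtualHost *:80>
            ServerName www.mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>

        <VirtualHost *:80>
            # the extra domain simply points its DocumentRoot at the account's folder
            ServerName www.myaccount.com
            DocumentRoot /var/www/mydomain/myaccount
        </VirtualHost>

    An application can do the same thing itself by inspecting the Host header of each request (e.g. $_SERVER['HTTP_HOST'] in PHP) and routing to the matching account.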


  • How do you detach an array of strings from shared memory? C

    - by Tim
    I have:

        int array_id;
        char *records[10];

        /* get the shared segment */
        if ((array_id = shmget(IPC_PRIVATE, 1, 0666)) == -1) {
            perror("Array Creating");
        }

        /* attach */
        records[0] = (char *) shmat(array_id, (void *) 0, 0);
        if ((int) *records == -1) {
            perror("Array Attachment");
        }

    which works fine, but when I try to detach I get an "invalid argument" error:

        /* detach */
        int error;
        if ((error = shmdt((void *) records[0])) == -1) {
            perror("array detachment");
        }

    Any ideas? Thank you.
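    For comparison, a minimal self-contained sketch of the attach/detach cycle (standard SysV shared-memory calls; note the conventional failure check compares the pointer against (void *) -1 instead of casting through a dereference):

        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void) {
            int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
            if (id == -1) { perror("shmget"); return 1; }

            char *seg = shmat(id, NULL, 0);
            if (seg == (void *) -1) { perror("shmat"); return 1; }

            /* ... use seg ... */

            if (shmdt(seg) == -1)            /* pass exactly what shmat returned */
                perror("shmdt");
            shmctl(id, IPC_RMID, NULL);      /* mark the segment for removal */
            return 0;
        }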


  • Proper setup of shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a huge problem setting up correct permissions for shared folders. I have:

        - Windows 7 x64 Ent., name: backupfb, joined to the domain, with a shared folder on drive E: (E:\backup)
        - 50 clients/laptops with TSM Tivoli FastBack for Workstations, which save files to the shared folder

    I need to configure permissions on my shared folders so that only the owner of a folder can access it. The folder structure is:

        E:\backup                  <- shared as the "backup" folder (\\backupfb\backup\)
        E:\backup\BackupAdmin      <- used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations; nodes require read-only access to these directories
        E:\backup\RealTimeBackup   <- client accounts create directories here that should be accessible only by the account that created them; as a result, the directory for a node is not created until that node connects to the server

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parent DISABLED:

        \\backupfb\backup\BackupAdmin
            Allow Users           (this folder, subfolders, and files):
                Traverse Folder / Execute, List Folder / Read Data,
                Read Attributes, Read Extended Attributes,
                Delete Subfolders and Files, Delete, Read Permissions
            Allow Administrators  Full Control (this folder, subfolders, and files)

    Both folders have the option "Apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

        \\backupfb\backup\RealTimeBackup
            Allow Administrators  Full Control (this folder, subfolders, and files)
            Allow CREATOR OWNER   Full Control (this folder, subfolders, and files)
            Allow Users (domain)  Special (this folder only):
                Traverse Folder / Execute, List Folder / Read Data,
                Read Attributes, Read Extended Attributes,
                Create Files / Write Data, Create Folders / Append Data,
                Delete Subfolders and Files, Read Permissions
            Allow OWNER RIGHTS    Full Control (this folder, subfolders, and files)

    Here I have a huge problem with CREATOR OWNER: I am able to set Full Control, but can only apply it to "Subfolders and files only". When I change the properties to "This folder, subfolders and files" and save, it reverts to "Subfolders and files only". So I tried using icacls to set the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    But after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist.

    Any idea? Please help ;-) Thanks.
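    One hedged observation on the icacls attempt (the (OI)(CI) flags are standard icacls syntax; the cloud\ domain follows the script above): a grant without inheritance flags applies to the folder itself only, so files and newly created subfolders don't receive it, even with /T. Granting with object- and container-inherit makes the ACE flow down to future items automatically:

        icacls "E:\backup\RealTimeBackup\%%~nxi" /grant:r "cloud\%%~nxi":(OI)(CI)F /T /C

    (OI) = object inherit (files), (CI) = container inherit (subfolders); together they reproduce "this folder, subfolders and files".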


  • ArchBeat Facebook Friday: Top 10 Shared Links - May 30- June 5, 2014

    - by OTN ArchBeat
    The list below is comprised of the Top 10 most popular articles, blog posts, videos, and other content shared over the last seven days with the more than 5,100 fans of the OTN ArchBeat Facebook Page.

        1. What is REST? | Maarten Smeets
           "Most Middleware developers will encounter RESTful services," says Oracle SOA / BPM / Java integration specialist Maarten Smeets. "It is good to understand what they are, what they should be and how they work." His extensive post will help you achieve that understanding.
        2. Integrating with Fusion Applications using SOAP web services and REST APIs | Arvind Srinivasamoorth
           This article, part one of Arvind Srinivasamoorth's two-part series on integrating with Fusion Applications using SOAP web services and REST APIs, shows you how to identify the Fusion Applications SOAP web service to be invoked.
        3. Oracle Technology Network | Architect Community
           Have you visited the OTN Solution Architect homepage lately? I've just updated it with information about the big OTN Virtual Tech Summit on July 9, plus the latest OTN tech articles and a fresh list of community videos and podcasts. Check it out!
        4. Starting and Stopping a Java EE Environment when using Oracle WebLogic | Rene van Wijk
           Oracle ACE Director and Oracle Fusion Middleware specialist Rene van Wijk explores ways to simplify the life-cycle management of a Java EE environment through the use of scripts developed with WebLogic Scripting Tool and Linux Bash.
        5. Application Composer Series: Where and When to use Groovy | Richard Bingham
           Richard Bingham describes his post as "more of a reference than an article." The post is comprised of a table that highlights where you can add your own custom logic via Groovy code and when you might use the various features.
        6. Kscope 2014: HFM Metadata Diagnostics | Eric Erikson
           Oracle Certified Hyperion Financial Management Specialist Eric Erikson will present three sessions at ODTUG Kscope 2014, June 22-26 in Seattle. Why should you care? Watch the video.
        7. Tuning Asynchronous Web Services in Fusion Applications | Jian Liang
           This article, the fourth in solution architect Jian Liang's five-part series on Fusion Applications and asynchronous Web Services, shows you how to conduct performance tuning of asynchronous web services in relation to Fusion Applications.
        8. IDM FA Integration Flows | Thiago Leoncio
           Fusion Applications uses Oracle Identity Management for its identity store and policy store by default. This article by solution architect Thiago Leoncio explains how user and role flows work from different points of view, using key IDM products for each flow in detail.
        9. GoldenGate and Oracle Data Integrator - A Perfect Match in 12c... Part 1: Getting Started | Michael Rainey
           Michael Rainey has already written extensively about integration between Oracle Data Integrator and GoldenGate, but he's not done. "With the release of the 12c versions of ODI and GoldenGate last October, and a soon-to-be-updated reference architecture, it's time to write a few posts on the subject again," he says. Here's the first of those posts.
        10. Video: Kscope 2014 Preview: Tim Tow on Essbase Java API and ODTUG Community
           Oracle ACE Director and ODTUG board member Tim Tow talks about his Kscope 2014 sessions focused on the Essbase Java API in this short video interview.


  • Access denied error while mounting a shared folder?

    - by SSH
    I am a Linux newbie and I have a very basic question. I have three machines, all running Ubuntu 12.04, and I have root access on all of them:

        machineA  10.108.24.132
        machineB  10.108.24.133
        machineC  10.108.24.134

    I am supposed to do the following on these machines: create the mount point /opt/exhibitor/conf, then mount the directory on all servers:

        sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

    I have already created the /opt/exhibitor/conf directory on all three machines, as mentioned above. Then I tried to create the mount point as follows.

    Install the NFS support files and NFS kernel server on all three machines:

        $ sudo apt-get install nfs-common nfs-kernel-server

    Create the shared directory on all three machines:

        $ mkdir /opt/exhibitor/conf/

    Edit /etc/exports and add this entry on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes   hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.*(rw)

    Run exportfs on all three machines:

        root@machineA:/# exportfs -rv
        exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.108.24.*:/opt/exhibitor/conf/".
          Assuming default behaviour ('no_subtree_check').
          NOTE: this default has changed since nfs-utils version 1.0.x
        exporting 10.108.24.*:/opt/exhibitor/conf

    Then I ran showmount on machineA:

        root@machineA:/# showmount -e 10.108.24.132
        Export list for 10.108.24.132:
        /opt/exhibitor/conf 10.108.24.*

    I have also started the NFS server on all three machines:

        sudo /etc/init.d/nfs-kernel-server start

    And now, when I try to mount, I get an error:

        root@machineA:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        mount.nfs: access denied by server while mounting 10.108.24.132:/opt/exhibitor/conf

    I have tried the same thing from machineB and machineC as well and still get the same error. Does my /etc/exports file look right? I have the same content on all three machines. Also, are there any NFS-related logs I can check for clues? Any idea what I am doing wrong here?

    UPDATE: My /etc/exports file would now be like this on all three machines (comment lines unchanged, entries replaced):

        /opt/exhibitor/conf/ 10.108.24.132(rw)
        /opt/exhibitor/conf/ 10.108.24.133(rw)
        /opt/exhibitor/conf/ 10.108.24.134(rw)

    Just a quick check: the IP address I am taking for each machine comes from ifconfig, e.g. on machineB:

        root@machineB:/# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:50:56:ad:5b:a7
                  inet addr:10.108.24.133  Bcast:10.108.27.255  Mask:255.255.252.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  ...

    So the IP address I am taking for machineB is 10.108.24.133.
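    One hedged note on the export line (standard exports(5) behaviour, not something stated in the post): the * wildcard is matched against client hostnames, so it may not match clients the server only knows by IP address. For an IP block, the usual form is a CIDR network; with the 255.255.252.0 netmask shown in the update, that would be something like:

        /opt/exhibitor/conf 10.108.24.0/22(rw,sync,no_subtree_check)

    followed by exportfs -ra to re-read the file.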


  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to forward the port to 192.168.1.131 (the laptop), but the port still isn't reachable from the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is: how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
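    A hedged sketch of the forwarding rules (standard iptables DNAT; the wlan0 interface name is an assumption, so check the actual interface locally). NetworkManager's sharing mode already sets up the NAT, so these only add the game-port redirect on top:

        # send incoming game traffic arriving on the wifi side to the eMac
        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
        sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1

        # let the redirected traffic pass the FORWARD chain
        sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
        sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT

    These rules do not survive a reboot by themselves; they would need to be re-applied from a script or saved with a tool such as iptables-save.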



  • Mounting a VirtualBox shared folder on boot with fstab in OpenSuse 11.3

    - by ccook
    I have followed the steps found here; however, the share is not mounted on boot. The share will mount if I run 'mount -a' after booting. Why would the share not mount on boot?

        1. Set up a virtual machine and install OpenSUSE 11.2.
        2. Create a shared folder on the host (HostFolder).
        3. Set up the shared folder in VirtualBox, via the Virtual Machine details or via Devices > Shared Folders...
        4. Install the dependencies for running the VirtualBox installer. You need to install the right development kernel package for your machine type (use 'zypper search -i kernel' to see what's installed): sudo zypper install make gcc kernel-source kernel-hosttype/default-devel
        5. Run the virtual machine and go to Devices > Guest Additions. This mounts an ISO image in your OpenSUSE guest.
        6. Open a root terminal and run:

               cd /usr/src/linux
               make oldconfig && make prepare && make scripts && make dep
               cp ../linux-obj/$HOSTTYPE/default/Module.symvers .
               make prepare

           (A commenter on the previously mentioned thread says this step is unnecessary, but it doesn't work without it on my system. I suggest trying step 7 first and returning to step 6 if that fails.)
        7. Run ./VirtualboxLinux<yourhosttype>.run from the mounted ISO image.
        8. Create the shared folder in OpenSUSE (GuestFolder).
        9. Test with: sudo mount -t vboxsf HostFolder /home/user/GuestFolder. It works? Great! Let's set up the system so it automounts for your regular user account instead of root-only access.
        10. Add this line to /etc/fstab: HostFolder /home/user/GuestFolder vboxsf defaults,uid=1000,gid=1000 0 0
        11. It works for me, but if it still doesn't automount after a reboot: sudo mount -a
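    On the actual question, a hedged note (a common explanation for this symptom, not something confirmed in the post): at boot, fstab is processed before the vboxsf module from the Guest Additions has been loaded, so the mount fails, while a later 'mount -a' succeeds. Two usual workarounds:

        # 1. Load the module early so the fstab line works as-is, e.g. add 'vboxsf'
        #    to MODULES_LOADED_ON_BOOT in /etc/sysconfig/kernel (openSUSE).

        # 2. Or mark the entry noauto and mount it late in the boot sequence:
        #    /etc/fstab:
        #        HostFolder /home/user/GuestFolder vboxsf noauto,uid=1000,gid=1000 0 0
        #    /etc/init.d/boot.local:
        mount /home/user/GuestFolder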


  • Can't open shared drive after disconnecting vpn

    - by Matt McMinn
    I use a VPN to connect to my office network. On my local network I have another WinXP machine that shares a printer and a few folders. While I'm connected to my work VPN, I can access the shared printer and folders on the other machine just fine, and vice versa. Once I disconnect the VPN, I can't access the local machine any more, and the other machine can't access my machine. The network itself seems OK: I can ping the other machine, get to the Internet, and reach a web server hosted by the other machine, but I can't get to the shared folders or printer. If I reconnect to the VPN, my access is restored. I'm guessing this is some sort of authentication thing, but I don't know what. Any ideas?

    Update: This problem is bothering me again, so here's an update. Depending on when I first access the WinXP machine, I either have this problem or the opposite problem. After a reboot, if I (for example) print and then connect to the VPN, I can't access the machine while on the VPN. If, after a reboot, I connect to the VPN and then print, I can't access the machine off the VPN. In both cases, if I enable/disable the VPN again, I can access the machine again. Thanks.


  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PF) have to be phased out in favour of shared mailboxes. Unfortunately, we have no procedures or guidelines for this migration, and I can't find much online either. I've already migrated one of our public folders as a test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (with New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account in AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state, copied the entire folder to a local PST in Outlook 2010, and copied the folder from there to the new mailbox. Permissions and Folder Visible were set for all users, and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient, so I'd welcome some input on automating or at least streamlining the process. Additionally, we are unsure what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving the folder structure is a key requirement, and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the calendars and contact lists in our public folders. I suppose I'll be forced to create new mailboxes for every one of these as well, then set up share permissions for their calendar and contact items, but would be happy to be proven wrong.
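    For reference, the Exchange 2007/2010 shell commands alluded to above look roughly like this (a hedged sketch with illustrative names; not available on Exchange 2003):

        New-Mailbox -Shared -Name "Sales PF" -UserPrincipalName salespf@example.com
        Add-MailboxPermission -Identity "Sales PF" -User "Sales Team" -AccessRights FullAccess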


  • Is there any good hosting for ASP.NET and MySQL?

    - by HAJJAJ
    Hi everyone. I have an account with a hosting company, I built my project in ASP.NET, and I used MySQL for the database. The hosting company is not giving me the privileges to create a new user or a new stored procedure. This is what they said to me:

        "Due to the shared nature of our environment we had to make some modifications to your procedure (namely the definer). We also had to review your procedure to determine if it would be compatible with our environment. While your procedures will work (via phpMyAdmin or some other interface), it is unlikely they will be accessible via the Connector/.NET (ADO.NET) that your application is likely using. This is due to a security restriction with how that connector works in shared environments. http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-stored.html 'Note: When you call a stored procedure, the command object makes an additional SELECT call to determine the parameters of the stored procedure. You must ensure that the user calling the procedure has the SELECT privilege on the mysql.proc table to enable them to verify the parameters. Failure to do this will result in an error when calling the procedure.' Unfortunately, giving read privileges on the mysql.proc table will give you access to the data of our other customers, and that is not an acceptable risk. If your application can only work using stored procedures, then MSSQL will probably be the better option for your site. I apologize for the inconvenience and the wait to have this ticket completed."

    So is there any good hosting that anybody has already used to publish an ASP.NET + MySQL project? This is one of my stored procedures, and I think it's simple and will not harm any other users:

        DELIMITER $$

        CREATE DEFINER=`root`@`localhost` PROCEDURE `SpcategoriesRead`(
            IN PaRactioncode VARCHAR(5),
            IN PaRCatID BIGINT,
            IN PaRSearchText TEXT
        )
        BEGIN
            -- CREATING TEMPORARY TABLE TO SAVE DATA FROM THE ACTIONCODE SELECTS
            DROP TEMPORARY TABLE IF EXISTS tmp;
            CREATE TEMPORARY TABLE tmp (
                CatID BIGINT PRIMARY KEY NOT NULL,
                CatTitle TEXT,
                CatDescription TEXT,
                CatTitleAr TEXT,
                CatDescriptionAr TEXT,
                PictureID BIGINT,
                Published BOOLEAN,
                DisplayOrder BIGINT,
                CreatedOn DATE
            );

            IF PaRactioncode = 1 THEN
                -- Retrieve all data from the database
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories;
            ELSEIF PaRactioncode = 2 THEN
                -- Retrieve from the database by ID
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE CatID = PaRCatID;
            ELSEIF PaRactioncode = 3 THEN
                -- NOSET YET
                INSERT INTO tmp
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tbcategories
                WHERE Published = 1
                ORDER BY DisplayOrder;
            END IF;

            IF PaRSearchText IS NOT NULL THEN
                SET PaRSearchText = CONCAT('%', PaRSearchText, '%');
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp
                WHERE CONCAT(CatTitle, CatDescription, CatTitleAr, CatDescriptionAr) LIKE PaRSearchText;
            ELSE
                SELECT CatID, CatTitle, CatDescription, CatTitleAr, CatDescriptionAr,
                       PictureID, Published, DisplayOrder, CreatedOn
                FROM tmp;
            END IF;

            DROP TEMPORARY TABLE IF EXISTS tmp;
        END $$
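    One workaround worth checking before switching databases (hedged; the option name comes from the Connector/NET connection-string documentation, so verify it against the installed version): the parameter lookup against mysql.proc can be disabled in the connection string, at the cost of having to add every parameter yourself, in the order the procedure declares them:

        Server=myhost;Database=mydb;Uid=myuser;Pwd=mypass;Check Parameters=false;

    With this set, call the procedure with CommandType.StoredProcedure and add the three PaR* parameters explicitly, in declaration order.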

