Search Results

Search found 24073 results on 963 pages for 'client profile'.


  • Creating subdomain in URL aliasing

    - by Jay
    I am creating a social networking site, and one of the requirements is to have a subdomain-style URL for each user. For example, user1's profile page would be user1.mysitename.com and user2's would be user2.mysitename.com. Can this be done with URL aliasing? Essentially, user1.mysitename.com should map to www.mysitename.com/profile.aspx?username=user1. I will be hosting this on Windows Server 2003 (IIS 6); any help is highly appreciated.
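    One common approach (a sketch, not from the original question) is to point a wildcard DNS record such as *.mysitename.com at the server, bind the IIS 6 site to that wildcard host header, and rewrite the host-based URL to the query-string form early in the ASP.NET pipeline. The file and property names below are illustrative assumptions, and extensionless requests may additionally need a wildcard ISAPI mapping in IIS 6:

        // Global.asax (sketch) - map user1.mysitename.com to /profile.aspx?username=user1
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string host = HttpContext.Current.Request.Url.Host;   // e.g. "user1.mysitename.com"
            string[] parts = host.Split('.');
            if (parts.Length == 3 && parts[0] != "www")            // skip www.mysitename.com
            {
                string user = parts[0];
                HttpContext.Current.RewritePath("/profile.aspx?username=" + user);
            }
        }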

    Read the article

  • OSGi bundle imports packages from non-bundle jars: create bundles for them?

    - by John Simmons
    I am new to OSGi, and am using Equinox. I have done several searches and can find no answer to this. The discussion at OSGI - handling 3rd party JARs required by a bundle helps somewhat, but does not fully answer my question. I have obtained a jar file, rabbitmq-client.jar, that is already packaged as an OSGi bundle (with Bundle-Name and other such properties in its MANIFEST.MF), that I would like to install as a bundle. This jar imports packages org.apache.commons.io and org.apache.commons.io.input from commons-io-1.2.jar. The RabbitMQ client 2.7.1 distribution also includes commons-cli-1.1.jar, so I presume that it is required as well. I examined the manifests of these commons jars and found that they do not appear to be packaged as bundles. That is, their manifests have none of the standard bundle properties.

    My specific question is: if I install rabbitmq-client.jar as a bundle, what is the proper way to get access to the packages that it needs to import from the commons jars? There are only three alternatives that I can think of, without rebuilding rabbitmq-client.jar:
    1. The packages from the commons jars are already included in the Equinox global classpath, and rabbitmq-client.jar will get them automatically from there.
    2. I must make another bundle with the two commons jars, export the needed packages, and install that bundle in Equinox.
    3. I must put these two commons jars in the global classpath when I start Equinox, and they will be available to rabbitmq-client.jar from there.

    I have read that one normally does not use the global classpath in an OSGi container. I am not clear on whether items from the global classpath are even available when building individual bundle classpaths. However, I note that rabbitmq-client.jar also imports other packages such as javax.net, which I presume come from the global classpath. Or is there some other bundle that exports them? Thanks for any assistance!
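    A sketch of alternative 2 (wrapping the two commons jars in their own bundle): place the jars inside a new wrapper jar next to a manifest along these lines, install it in Equinox, and rabbitmq-client.jar's Import-Package entries for the commons packages should then resolve against it. The symbolic name and version are made up, and manifest lines longer than 72 bytes must be continued with a leading space, as on the Export-Package line:

        Manifest-Version: 1.0
        Bundle-ManifestVersion: 2
        Bundle-SymbolicName: org.apache.commons.wrapper
        Bundle-Version: 1.0.0
        Bundle-ClassPath: commons-io-1.2.jar,commons-cli-1.1.jar
        Export-Package: org.apache.commons.io,org.apache.commons.io.input,
         org.apache.commons.cli

    A tool such as bnd can generate an equivalent manifest automatically, and javax.net is exported by the system bundle (the JRE packages Equinox exposes), which is why that import resolves without any extra bundle.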

    Read the article

  • What is the best way to get a reference to a spring bean in the backend layers?

    - by java_pill
    I have two Spring config files and I'm specifying them in my web.xml as below. web.xml snippet:

        ...
        <context-param>
          <param-name>contextConfigLocation</param-name>
          <param-value>WEB-INF/classes/domain-context.xml WEB-INF/classes/client-ws.xml</param-value>
        </context-param>
        <listener>
          <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>
        ...

    From my domain object I have to invoke a Web Service client, and to get a reference to that client I do this:

        ApplicationContext context = new ClassPathXmlApplicationContext("client-ws.xml"); // because I don't want to use WebApplicationContextUtils
        ProductServiceClient client = (ProductServiceClient) context.getBean("productClient");
        ...
        client.find(prodID); // calls a Web Service
        ...

    However, I have concerns that loading client-ws.xml and looking up the ProductServiceClient bean this way is not efficient. I thought of getting it using WebApplicationContextUtils, but I don't want my domain objects to have a dependency on the ServletContext (a web/control-layer object), because WebApplicationContextUtils depends on ServletContext. What is the best way to get a reference to a spring bean in the backend layers? Thanks!
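    One sketch of an alternative (class and bean names are illustrative): register a holder bean that implements ApplicationContextAware in the context already created by ContextLoaderListener, so the backend layer can fetch beans without touching the ServletContext and without building a second ApplicationContext per call:

        // Declared as a <bean> in domain-context.xml or client-ws.xml (hypothetical)
        public class SpringContextHolder implements org.springframework.context.ApplicationContextAware {
            private static org.springframework.context.ApplicationContext context;

            public void setApplicationContext(org.springframework.context.ApplicationContext ctx) {
                context = ctx; // called once by Spring when the web application context starts
            }

            public static <T> T getBean(String name, Class<T> type) {
                return type.cast(context.getBean(name));
            }
        }

        // Usage from the domain layer:
        ProductServiceClient client = SpringContextHolder.getBean("productClient", ProductServiceClient.class);

    When the domain object is itself Spring-managed, plain constructor or setter injection of ProductServiceClient is usually preferable; the holder is mainly for objects created outside Spring.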

    Read the article

  • How to retrieve items from a django queryset?

    - by sharataka
    I'm trying to get the video element in a queryset but am having trouble retrieving it.

        user_channel = Everything.objects.filter(profile = request.user, playlist = 'Channel')
        print user_channel[0]           # returns the first result without error
        print user_channel[0]['video']  # returns error

    Models.py:

        class Everything(models.Model):
            profile = models.ForeignKey(User)
            playlist = models.CharField('Playlist', max_length = 2000, null=True, blank=True)
            platform = models.CharField('Platform', max_length = 2000, null=True, blank=True)
            video = models.CharField('VideoID', max_length = 2000, null=True, blank=True)
            video_title = models.CharField('Title of Video', max_length = 2000, null=True, blank=True)

            def __unicode__(self):
                return u'%s %s %s %s %s' % (self.profile, self.playlist, self.platform, self.video, self.video_title)
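    A queryset yields model instances rather than dictionaries, so the field is read as an attribute; a minimal sketch of the two usual options:

        # attribute access on the model instance
        print user_channel[0].video

        # or ask the ORM for plain dicts up front
        for row in user_channel.values('video', 'video_title'):
            print row['video']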

    Read the article

  • Is it still true that making cross-browser layouts for desktop browsers with table+css is easier than with div+css?

    - by metal-gear-solid
    One of my web designer friends still builds sites with tables, but he uses CSS very nicely; I also use CSS nicely, but with <div>, and I run into more cross-browser layout problems than he does. I gave him some reasons why <table> layouts have drawbacks. Here is our whole discussion:

    I - Your site will be problematic with screen readers.
    My friend - OK, but I have never had a call from any client about this.
    I - You will spend more time making layout changes when the client requests them.
    My friend - I don't think so, but if that's true, show me how I would save time with <div>.
    I - Your sites will not work well with search engines.
    My friend - That's not true. I've made many sites and never had a problem with any site or client over this.
    I - Table layout is the old way, not the W3C standard way.
    My friend - What is old and what is new? Who is the W3C? I don't know. What is "standard"? Whatever I make works in all browsers; that's enough for me, and my clients will not pay extra for standards or W3C guidelines.
    I - Your site will not work in mobile browsers.
    My friend - No problem for me; my clients don't care about mobile phones.
    I - Your sites are not accessible.
    My friend - What do you mean, not accessible? Whatever I make works in all browsers, and no client has ever asked about accessibility.
    I - You will get less work in the future if you stick with tables.
    My friend - OK, no problem; when clients stop accepting table-based sites, I will learn div-based layouts then.

    My questions:
    1. Is it still true that making cross-browser layouts for desktop browsers with table+css is easier than with div+css?
    2. What is the benefit for the developer of using div+css layouts in place of <table> layouts, if the client doesn't mind which I use?

    Read the article

  • How can I pass an object as a parameter in the google app engine RPC flow?

    - by jimmartens
    I'm building a pretty basic app, and one thing I want to do is pass an object as a parameter up through the service - async - impl chain instead of passing up a million separate parameters. So in the async interface I do something like this:

        import shared.Profile;
        ...
        public interface ProfileServiceAsync {
            public void addProfile(Profile inProf, AsyncCallback<Void> async);

    Profile is a class in com. ... .shared, and I have the following in my ... .gwt.xml:

        <source path='shared'/>

    That said, when I try to compile I get this error:

        [ERROR] Errors in 'file:/D:/projects/eclipse/workspace/.../src/com/.../client/ProfileServiceAsync.java'
        [ERROR] Line 11: No source code is available for type shared.Profile; did you forget to inherit a required module?

    Any ideas on this?
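    One thing worth checking (a sketch, with "myapp" and the module file name as placeholders): the error names the type shared.Profile, i.e. a top-level "shared" package, but <source path='shared'/> only makes packages translatable relative to the module's own package. The usual layout keeps Profile under the module's package and imports it by its full name, and GWT RPC additionally expects the class to be serializable with a no-arg constructor:

        com/myapp/Module.gwt.xml         (contains <source path='client'/> and <source path='shared'/>)
        com/myapp/client/ProfileServiceAsync.java
        com/myapp/shared/Profile.java    (package com.myapp.shared; implements java.io.Serializable
                                          or IsSerializable, with a zero-argument constructor)

        // in ProfileServiceAsync.java
        import com.myapp.shared.Profile;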

    Read the article

  • Is Storing Cookies in a Database Safe?

    - by viatropos
    If I use mechanize I can, for instance, create a new Google Analytics profile for a website. I do this by programmatically filling out the login form and storing the cookies in the database. Then, at least until the cookie expires, I can access my analytics admin panel without having to enter my username and password again. Assuming you can't create a new analytics profile any other way (with OpenAuth or any of that, I don't think it works for actually creating a new Google Analytics profile; the Analytics API is for viewing the data, but I need to create a new analytics profile), is storing the cookie in the database a bad thing? If I do store the cookie in the database, it makes it super easy to programmatically log in to Google Analytics without the user ever having to go to the browser (maybe the app has functionality that says "user, you can schedule a hook that creates a new analytics profile for each new domain you create; just enter your credentials once and we'll keep you logged in and safe"). Otherwise I have to keep passing around emails and passwords, which seems worse. So is storing cookies in the database safe?

    Read the article

  • Is there a better way to do updates in LinqToSQL?

    - by Vaccano
    I have a list (that comes to my middleware app from the client) that I need to put in my database. Some items in the list may already be in the db and just need an update; others are new inserts. This turned out to be much harder than I thought it would be. Here is my code to do that. I am hoping there is a better way:

        public void InsertClients(List<Client> clients)
        {
            var comparer = new LambdaComparer<Client>((x, y) => x.Id == y.Id);

            // Get a listing of all the ones we will be updating
            var alreadyInDB = ctx.Clients
                .Where(client => clients.Contains(client, comparer));

            // Update the changes for those already in the db
            foreach (Client clientDB in alreadyInDB)
            {
                var clientDBClosure = clientDB;
                Client clientParam = clients.Find(x => x.Id == clientDBClosure.Id);

                clientDB.ArrivalTime = clientParam.ArrivalTime;
                clientDB.ClientId = clientParam.ClientId;
                clientDB.ClientName = clientParam.ClientName;
                clientDB.ClientEventTime = clientParam.ClientEventTime;
                clientDB.EmployeeCount = clientParam.EmployeeCount;
                clientDB.ManagerId = clientParam.ManagerId;
            }

            // Get a list of all clients that are not in the database.
            var notInDB = clients.Where(x => alreadyInDB.Contains(x, comparer) == false);
            ctx.Clients.InsertAllOnSubmit(notInDB);

            ctx.SubmitChanges();
        }

    This seems like a lot of work for a simple update, but maybe I am just spoiled. Anyway, if there is an easier way to do this please let me know. Note: if you are curious, the code for the LambdaComparer is here: http://gist.github.com/335780#file_lambda_comparer.cs
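    A sketch of one simpler shape (assuming Id is the primary key and the field names above; the method name is made up): pull the matching rows once by Id, index them in a dictionary, then either update or queue an insert per incoming item.

        public void UpsertClients(List<Client> clients)
        {
            var ids = clients.Select(c => c.Id).ToList();

            // single round trip; Contains on a local list translates to a SQL IN clause
            var existing = ctx.Clients
                .Where(c => ids.Contains(c.Id))
                .ToDictionary(c => c.Id);

            foreach (Client incoming in clients)
            {
                Client target;
                if (existing.TryGetValue(incoming.Id, out target))
                {
                    target.ArrivalTime = incoming.ArrivalTime;  // copy the remaining fields the same way
                    target.ClientName = incoming.ClientName;
                }
                else
                {
                    ctx.Clients.InsertOnSubmit(incoming);
                }
            }

            ctx.SubmitChanges();
        }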

    Read the article

  • How to pass a parameter in a Javascript confirm function?

    - by Miles M.
    I have something like this in my code (this is the view):

        <?php foreach($clients as $client): ?>
        <tr class="tableContent">
          <td onclick="location.href='<?php echo site_url('clients/edit/'.$client->id ) ?>'"><?php echo $client->id ?></td>
          <td><a class='Right btn btn-danger' onClick="ConfirmMessage('client', <?php $client->id ?>,'clients')">
            <i class="icon-remove-sign icon-white"></i>
          </a></td>
        </tr>
        <?php endforeach ?>

    When the user clicks on the delete button (the one with the btn-danger class), I'd like them to confirm the choice with a JavaScript confirmation box. The script is in the header:

        <script>
        function ConfirmMessage(type, id, types) {
            if (confirm("Are you sure you want to delete this ",type," ?")) {
                // click on OK
                document.location.href='<?php echo site_url(); ?>',types,'/delete/',id;
            }
        }
        </script>

    So here is my question: I would like the type (client, article, post, ...) to come from a parameter that I pass to the function, and I would like to get the $client->id parameter as well. I'm bad at JavaScript and, as you have already guessed, it is obviously not working at all.
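    A sketch of a fix: JavaScript joins strings with +, not commas, so both the confirm message and the redirect URL need concatenation, and the id in the view needs an echo (and quotes) so it is actually emitted into the onClick attribute:

        function ConfirmMessage(type, id, types) {
            if (confirm("Are you sure you want to delete this " + type + "?")) {
                // click on OK: builds e.g. http://mysite/clients/delete/42
                document.location.href = '<?php echo site_url(); ?>' + types + '/delete/' + id;
            }
        }

        // in the view, note the echo that was missing around $client->id:
        // onClick="ConfirmMessage('client', '<?php echo $client->id ?>', 'clients')"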

    Read the article

  • Single-entry implementation gone wrong

    - by user745434
    I'm doing my first single-entry site and, based on the result, I can't see the benefit. I've implemented the following:
    - .htaccess redirects all requests to index.php at the root
    - URL is parsed and each /segment/ is stored as an element in an array
    - First segment indicates which folder to include (e.g. "users" » "/pages/users/index.php")
    - index.php file of each folder parses the remaining elements in the segments array until the array is empty
    - content.php file of each folder is included if there are no more elements in the segments array, indicating that the destination file is reached

    Sample file structure (folders in []):

        [root]
          index.php
          [pages]
            [users]
              index.php
              content.php
              [profile]
                index.php
                content.php
                [edit]
                  index.php
                  content.php
            [other-page]
              index.php
              content.php

    Request: http://mysite.com/users/profile/
    1. .htaccess redirects the request to http://mysite.com/index.php
    2. URL is parsed and the segments array contains: [1] users, [2] profile
    3. index.php maps [1] to "pages/users/index.php", so includes that file
    4. pages/users/index.php maps [2] to pages/users/profile/index.php, so includes that file
    5. Since there are no other elements in the segments array, the content.php file in the current folder (pages/users/profile) is included

    I'm not really seeing the benefit of doing this over having functions that include components of the site (e.g. include_header(), include_footer(), etc.), so I conclude that I'm doing something terribly wrong. I'm just not sure what it is.

    Read the article

  • Output error in comparing characters from two strings

    - by Andrew Martin
    I'm stuck on a piece of code I'm writing. My IDE is Eclipse, and when I use its debugging feature to trace what's happening on each line, it outputs perfectly. However, when I run the project normally, it just outputs a blank screen:

        public static void compareInterests(Client[] clientDetails) {
            int interests = 0;
            for (int p = 0; p < numberOfClients; p++) {
                for (int q = 0; q < numberOfClients; q++) {
                    String a = clientDetails[p].getClientInterests();
                    String b = clientDetails[q].getClientInterests();
                    int count = 0;
                    while (count < a.length()) {
                        if (a.charAt(count) == b.charAt(count))
                            interests++;
                        count++;
                    }
                    if ((interests >= 3) && (clientDetails[p].getClientName() != clientDetails[q].getClientName()))
                        System.out.print(clientDetails[p].getClientName() + " is compatible with " + clientDetails[q].getClientName());
                    interests = 0;
                }
            }
        }

    The code takes an object array containing each client's name and interests. The interests are stored in a string such as "01010", where each 1 means the client is interested in that activity and each 0 means they are not. My code compares each character of every client's string with every other client's string and outputs the result for every pair of clients that don't have the same name and have three or more interests in common. When I run this code through Java's debugger it outputs fine, but when I run or compile the project I just get a blank screen. Any ideas?
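    A few things stand out (a hedged sketch, assuming the Client getters shown above): comparing names with != checks object identity rather than content, a position where both strings contain '0' is currently counted as a shared interest, and print without a newline makes console output easy to miss. One possible cleanup, which also reports each pair only once:

        public static void compareInterests(Client[] clientDetails) {
            for (int p = 0; p < numberOfClients; p++) {
                for (int q = p + 1; q < numberOfClients; q++) {            // each pair once
                    String a = clientDetails[p].getClientInterests();
                    String b = clientDetails[q].getClientInterests();
                    int shared = 0;
                    for (int i = 0; i < Math.min(a.length(), b.length()); i++) {
                        if (a.charAt(i) == '1' && b.charAt(i) == '1') {    // both interested
                            shared++;
                        }
                    }
                    if (shared >= 3 && !clientDetails[p].getClientName().equals(clientDetails[q].getClientName())) {
                        System.out.println(clientDetails[p].getClientName()
                                + " is compatible with " + clientDetails[q].getClientName());
                    }
                }
            }
        }

    If the output is still blank outside the debugger, it may also be worth confirming that numberOfClients is actually populated before this method runs in a normal launch.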

    Read the article

  • How to show a specific link to a user who's signed in and on a specific page in Rails?

    - by sevens
    I have the following code for part of my navigation:

        <% if user_signed_in? %>
          <li><%= link_to 'Edit Profile', edit_profile_path(current_user.profile) %></li>
          <li><%= link_to 'Edit Account', edit_user_registration_path %></li>
        <% elsif user_signed_in? and params[:controller] == 'profiles#edit' %>
          <li><%= link_to 'View Profile', profile_path(current_user.profile) %></li>
          <li><%= link_to 'Edit Account', edit_user_registration_path %></li>
        <% else %>
          <li><%= link_to 'Sign up', new_user_registration_path %></li>
        <% end %>

    I want different links to show depending on which page the signed-in user is on. However, my <% elsif user_signed_in? and params[:controller] == 'profiles#edit' %> branch doesn't seem to be working. What am I doing wrong?
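    Two things stand out (a sketch, assuming Devise-style helpers as in the post): params[:controller] is just 'profiles' (the action lives separately in params[:action]), and the first user_signed_in? branch already matches every signed-in user, so the elsif can never run; putting the more specific test first is one way around it:

        <% if user_signed_in? && params[:controller] == 'profiles' && params[:action] == 'edit' %>
          <li><%= link_to 'View Profile', profile_path(current_user.profile) %></li>
          <li><%= link_to 'Edit Account', edit_user_registration_path %></li>
        <% elsif user_signed_in? %>
          <li><%= link_to 'Edit Profile', edit_profile_path(current_user.profile) %></li>
          <li><%= link_to 'Edit Account', edit_user_registration_path %></li>
        <% else %>
          <li><%= link_to 'Sign up', new_user_registration_path %></li>
        <% end %>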

    Read the article

  • application_authenticaterequest does not fire when using Page.GetRouteUrl

    - by msoumare
    I'm converting some URLs in a web application to ASP.NET 4.0 friendly (SEO) URLs:

        From: <a href="profile.aspx" ></a>
        To:   <a href="<%= Page.GetRouteUrl("Profile", null) %>" ></a>

    The problem: before the conversion, hitting profile.aspx would fire Application_AuthenticateRequest, but after the conversion, hitting the URL generated by Page.GetRouteUrl does not fire Application_AuthenticateRequest. Thanks

    Read the article

  • SYN receives RST,ACK very frequently

    - by user1289508
    Hi socket programming experts, I am writing a proxy server on Linux for a SQL database server running on Windows. The proxy is written in C using BSD sockets, and it works just fine. When I use a database client (written in Java, running on a Linux box) to fire queries with a concurrency of 100 or more directly at the database server, I see no connection resets. But through my proxy I experience many connection resets. Digging deeper, I found that the connection from 'DB client' to 'Proxy' always succeeds, but when the 'Proxy' tries to connect to the DB server the connection sometimes fails because the SYN packet is answered with RST,ACK. That was the background. The question is: why does the SYN sometimes receive RST,ACK?

    'DB client (Linux)' to 'Server (Windows)' ---- works fine
    'DB client (Linux)' to 'Proxy (Linux)' to 'Server (Windows)' ----- problematic

    I am aware this can happen in the "connection refused" case, but this is definitely not that. SYN flooding might be another scenario, but that does not explain why firing at the server directly behaves fine. I suspect some socket option may be required that the client sets before connecting and my proxy does not. Please shed some light on this. Any help (links or pointers) is most appreciated.

    Additional info: I wrote a C client that makes concurrent connections, taking the concurrency as an argument. My observations:
    - At a concurrency of 5000 and above, some connects failed with 'connection refused'.
    - Below 2000 it works fine, but the actual problem is observed even at a concurrency of 100 or more.

    Note: the problem is time dependent; sometimes it never appears and sometimes it is very frequent, and the DB client (directly to the server) works fine at all times.
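    One common reason a SYN is answered with RST,ACK when something is clearly listening is that the server's accept backlog fills up under a burst of connects, which a single proxy host can produce more easily than many spread-out clients. While that is being confirmed (backlog size on the Windows server, ephemeral-port/TIME_WAIT pressure on the proxy), a retry with backoff around connect() often masks the symptom; the helper name below is made up:

        /* hedged sketch: retry connect() a few times when the peer momentarily refuses */
        #include <errno.h>
        #include <unistd.h>
        #include <sys/socket.h>

        int connect_retry(int domain, int type, const struct sockaddr *addr, socklen_t alen)
        {
            for (int attempt = 0; attempt < 5; attempt++) {
                int fd = socket(domain, type, 0);
                if (fd < 0)
                    return -1;
                if (connect(fd, addr, alen) == 0)
                    return fd;                        /* connected */
                close(fd);
                if (errno != ECONNREFUSED && errno != ECONNRESET && errno != ETIMEDOUT)
                    return -1;                        /* some other failure; give up */
                usleep((attempt + 1) * 100 * 1000);   /* back off 100 ms, 200 ms, ... */
            }
            return -1;
        }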

    Read the article

  • Which workaround to use for the following SQL deadlock?

    - by Marko
    I found a SQL deadlock scenario in my application during concurrency. I belive that the two statements that cause the deadlock are (note - I'm using LINQ2SQL and DataContext.ExecuteCommand(), that's where this.studioId.ToString() comes into play): exec sp_executesql N'INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId]) SELECT @p0, [t0].[Id] FROM [dbo].[UpdatedRows] AS [t0] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [dbo].[ReceivedUpdatedRows] AS [t1] WHERE ([t1].[StudioId] = @p0) AND ([t1].[UpdatedRowId] = [t0].[Id]) ))',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; and exec sp_executesql N'INSERT INTO HQ.dbo.ReceivedUpdatedRows ([UpdatedRowId], [StudioId], [ReceiveDateTime]) SELECT [t0].[UpdatedRowId], @p0, GETDATE() FROM [dbo].[SynchronizingRows] AS [t0] WHERE ([t0].[StudioId] = @p0)',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "'; The basic logic of my (client-server) application is this: Every time someone inserts or updates a row on the server side, I also insert a row into the table UpdatedRows, specifying the RowId of the modified row. When a client tries to synchronize data, it first copies all of the rows in the UpdatedRows table, that don't contain a reference row for the specific client in the table ReceivedUpdatedRows, to the table SynchronizingRows (the first statement taking part in the deadlock). Afterwards, during the synchronization I look for modified rows via lookup of the SynchronizingRows table. This step is required, otherwise if someone inserts new rows or modifies rows on the server side during synchronization I will miss them and won't get them during the next synchronization (explanation scenario to long to write here...). Once synchronization is complete, I insert rows to the ReceivedUpdatedRows table specifying that this client has received the UpdatedRows contained in the SynchronizingRows table (the second statement taking part in the deadlock). Finally I delete all rows from the SynchronizingRows table that belong to the current client. The way I see it, the deadlock is occuring on tables SynchronizingRows (abbreviation SR) and ReceivedUpdatedRows (abbreviation RUR) during steps 2 and 3 (one client is in step 2 and is inserting into SR and selecting from RUR; while another client is in step 3 inserting into RUR and selecting from SR). I googled a bit about SQL deadlocks and came to a conclusion that I have three options. Inorder to make a decision I need more input about each option/workaround: Workaround 1: The first advice given on the web about SQL deadlocks - restructure tables/queries so that deadlocks don't happen in the first place. Only problem with this is that with my IQ I don't see a way to do the synchronization logic any differently. If someone wishes to dwelve deeper into my current synchronization logic, how and why it is set up the way it is, I'll post a link for the explanation. Perhaps, with the help of someone smarter than me, it's possible to create a logic that is deadlock free. Workaround 2: The second most common advice seems to be the use of WITH(NOLOCK) hint. The problem with this is that NOLOCK might miss or duplicate some rows. Duplication is not a problem, but missing rows is catastrophic! Another option is the WITH(READPAST) hint. On the face of it, this seems to be a perfect solution. I really don't care about rows that other clients are inserting/modifying, because each row belongs only to a specific client, so I may very well skip locked rows. 
But the MSDN documentaion makes me a bit worried - "When READPAST is specified, both row-level and page-level locks are skipped". As I said, row-level locks would not be a problem, but page-level locks may very well be, since a page might contain rows that belong to multiple clients (including the current one). While there are lots of blog posts specifically mentioning that NOLOCK might miss rows, there seems to be none about READPAST (never) missing rows. This makes me skeptical and nervous to implement it, since there is no easy way to test it (implementing would be a piece of cake, just pop WITH(READPAST) into both statements SELECT clause and job done). Can someone confirm whether the READPAST hint can miss rows? Workaround 3: The final option is to use ALLOW_SNAPSHOT_ISOLATION and READ_COMMITED_SNAPSHOT. This would seem to be the only option to work 100% - at least I can't find any information that would contradict with it. But it is a little bit trickier to setup (I don't care much about the performance hit), because I'm using LINQ. Off the top of my head I probably need to manually open a SQL connection and pass it to the LINQ2SQL DataContext, etc... I haven't looked into the specifics very deeply. Mostly I would prefer option 2 if somone could only reassure me that READPAST will never miss rows concerning the current client (as I said before, each client has and only ever deals with it's own set of rows). Otherwise I'll likely have to implement option 3, since option 1 is probably impossible... I'll post the table definitions for the three tables as well, just in case: CREATE TABLE [dbo].[UpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED, [RowId] [uniqueidentifier] NOT NULL, [UpdateDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX IX_RowId ON dbo.UpdatedRows ([RowId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[ReceivedUpdatedRows]( [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY NONCLUSTERED, [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]), [StudioId] [uniqueidentifier] NOT NULL REFERENCES, [ReceiveDateTime] [datetime] NOT NULL, ) ON [PRIMARY] GO CREATE CLUSTERED INDEX IX_Studios ON dbo.ReceivedUpdatedRows ([StudioId] ASC) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[SynchronizingRows]( [StudioId] [uniqueidentifier] NOT NULL [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]) PRIMARY KEY CLUSTERED ([StudioId], [UpdatedRowId]) ) ON [PRIMARY] GO PS! Studio = Client. PS2! I just noticed that the index definitions have ALLOW_PAGE_LOCK=ON. If I would turn it off, would that make any difference to READPAST? Are there any negative downsides for turning it off?
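    For reference, workaround 3 is mostly a one-time database-level switch; a hedged sketch, using the HQ database name from the queries above (the second statement needs exclusive access, hence ROLLBACK IMMEDIATE):

        ALTER DATABASE HQ SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE HQ SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    With READ_COMMITTED_SNAPSHOT on, the SELECT side of each INSERT ... SELECT reads row versions instead of taking shared locks, which usually removes the reader/writer ordering that produces this deadlock, and LINQ to SQL needs no special connection handling for it.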

    Read the article

  • MySQL won't start or isn't installed

    - by Owen
    Hi there, I'm trying to get a local LAMP setup on my Ubuntu desktop. I'm successfully got PHP install but I'm having trouble with MySQL If PHP tries to connet to MySQL I get this error: Warning: mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in /var/www/testing.php on line 3 Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) If I try via command line I get much the same error: owen@desktop:~$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (13) Weirdly "/var/run/mysqld" does not exist. Running a whereis command I get the following: owen@desktop:~$ whereis mysqld.sock mysqld: /usr/sbin/mysqld /usr/share/man/man8/mysqld.8.gz So is MySQL even installed? Well acording to dpkg owen@desktop:~$ dpkg -l | grep mysql ii libapache2-mod-auth-mysql 4.3.9-13ubuntu1 Apache 2 module for MySQL authentication ii libdbd-mysql-perl 4.016-1 Perl5 database interface to the MySQL database ii libmysqlclient15off 5.1.30really5.0.83-0ubuntu3 MySQL database client library ii libmysqlclient16 5.1.49-1ubuntu8.1 MySQL database client library ii mysql-admin 5.0r14+openSUSE-2.1 GUI tool for intuitive MySQL administration ii mysql-client-5.1 5.1.49-1ubuntu8.1 MySQL database client binaries ii mysql-client-core-5.1 5.1.49-1ubuntu8.1 MySQL database core client binaries ii mysql-common 5.1.49-1ubuntu8.1 MySQL database common files, e.g. /etc/mysql/my.cnf ii mysql-gui-tools-common 5.0r14+openSUSE-2.1 Architecture independent files for MySQL GUI Tools ii mysql-query-browser 5.0r14+openSUSE-2.1 Official GUI tool to query MySQL database ii mysql-server 5.1.49-1ubuntu8.1 MySQL database server (metapackage depending on the latest version) ii mysql-server-5.1 5.1.49-1ubuntu8.1 MySQL database server binaries and system database setup ii mysql-server-core-5.0 5.1.30really5.0.83-0ubuntu3 MySQL database core server files ii mysql-server-core-5.1 5.1.49-1ubuntu8.1 MySQL database server binaries ii php5-mysql Can someone please help I'm really confused as what to do next. I'm not a Linux expert at all most of these commands I've ran I found of diffrent blogs and help forums.

    Read the article

  • Apache2 Permission denied: access to / denied

    - by futureled
    Hi, after installing and starting apache2 i can't open the website and got the error "Forbidden You don't have permission to access / on this server." I tried some different options in the httpd.conf, but nothing helped me solving this problem. All permissions for every directory are "drwxr-xr-x". The directory /www contains a file names index.html with the same permissions. Please do not wonder, the time in the errorlog is not correctly. I have no idea what the problem is, i hope someone can help me. my httpd.conf: ServerRoot "/etc/apache2" Listen 80 User daemon Group daemon ServerAdmin [email protected] DocumentRoot "/var/www" Options FollowSymLinks AllowOverride None Order Deny,Allow Deny from all Options Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all DirectoryIndex index.html Order allow,deny Deny from all Satisfy All ErrorLog /var/apache2/logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog /var/apache2/logs/access_log common ScriptAlias /cgi-bin/ "/usr/share/apache2/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all DefaultType text/plain TypesConfig /etc/apache2/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz SSLRandomSeed startup builtin SSLRandomSeed connect builtin my error_log: [Sat Jan 01 00:50:26 2000] [notice] caught SIGTERM, shutting down [Sat Jan 01 00:50:33 2000] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Sat Jan 01 00:50:34 2000] [notice] Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.8j configured -- resuming normal operations [Sat Jan 01 00:50:36 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied [Sat Jan 01 00:50:38 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied [Sat Jan 01 00:50:38 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
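    The pasted config appears to have lost its <Directory> wrappers, so it is hard to tell which block the "Deny from all" applies to; a hedged sketch of what the document-root block normally looks like on Apache 2.2 (the version shown in the error log), alongside checking that /var/www and every parent directory are readable and traversable by the daemon user:

        <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>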

    Read the article

  • Apache is responding with a blank white page

    - by Bruno Araujo
    I have the following situation: A site hosted in apache 2.4, with ssl, that works like a charm for a while now, but out of no where, without modifications to the site, apache started serving random blank pages. The workaround this is to delete the cookies of the browser or restart the browser. I've switched the vitualhost to log in debug mode but it didn't got me anywhere. Here is the debug log of a failed page load: [Wed Oct 24 10:57:35.762547 2012] [ssl:info] [pid 27854:tid 140617706374912] [client 192.168.10.150:58917] AH01964: Connection to child 147 established (server xxx.com.br:443) [Wed Oct 24 10:57:35.762739 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1966): [client 192.168.10.150:58917] AH02043: SSL virtual host for servername xxx.com.br found [Wed Oct 24 10:57:35.777479 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(1899): [client 192.168.10.150:58917] AH02041: Protocol: TLSv1, Cipher: DHE-RSA-AES256-SHA (256/256 bits) [Wed Oct 24 10:57:35.779912 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_kernel.c(243): [client 192.168.10.150:58917] AH02034: Initial (No.1) HTTPS request received for child 147 (server xxx.com.br:443) [Wed Oct 24 10:57:35.780044 2012] [authz_core:debug] [pid 27854:tid 140617706374912] mod_authz_core.c(809): [client 192.168.10.150:58917] AH01628: authorization result: granted (no directives) [Wed Oct 24 10:57:40.783950 2012] [ssl:info] [pid 27854:tid 140617706374912] (70007)The timeout specified has expired: [client 192.168.10.150:58917] AH01991: SSL input filter read failed. [Wed Oct 24 10:57:40.784077 2012] [ssl:debug] [pid 27854:tid 140617706374912] ssl_engine_io.c(988): [remote 192.168.10.150:58917] AH02001: Connection closed to child 147 with standard shutdown (server xxx.com.br:443)

    Read the article

  • openvpn TCP/UDP slow SSH/SMB performance

    - by Petr Latal
    I have question about strange behavior of my openVPN configuration on Debian lenny. I have 2 server configs (one proto tcp-server based and one proto udp based). ISP bandwidth is 7Mbit/7Mbit. When I uses proto tcp-server my download server rate is fine around 6,4 Mbit/s, but upload rate is about 3Mbit/s. When I uses proto udp, my download server rate is around 3Mbit/s and upload rate around 6,4Mbit/s. I tried to handle the MTU, MSSFIX and cipher on/off on server and client configs to synchronize rates, but without solution. Here is TCP based SERVER config: mode server tls-server port 1194 proto tcp-server dev tap0 ifconfig 11.10.15.1 255.255.255.0 ifconfig-pool 11.10.15.2 11.10.15.20 255.255.255.0 push "route 192.168.1.0 255.255.255.0" push "dhcp-option DNS 192.168.1.200" push "route-gateway 11.10.15.1" push "dhcp-option WINS 192.168.1.200" route-up /etc/openvpn/routeup.sh duplicate-cn ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh2048.pem log-append /var/log/openvpn.log status /var/run/vpn.status 10 user nobody group nogroup keepalive 10 120 comp-lzo verb 3 script-security 3 plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth persist-tun persist-key mssfix cipher BF-CBC Here is UDP based SERVER config: port 1194 proto udp dev tun0 local xx.xx.xx.xx server 11.10.15.0 255.255.255.0 ca /etc/openvpn/ca.crt cert /etc/openvpn/server.crt key /etc/openvpn/server.key dh /etc/openvpn/dh2048.pem log-append /var/log/openvpn.log status /var/run/vpn.status 10 user nobody group nogroup keepalive 10 120 comp-lzo verb 3 duplicate-cn script-security 3 plugin /usr/lib/openvpn/openvpn-auth-pam.so system-auth persist-tun persist-key tun-mtu 1500 mssfix 1212 client-to-client ifconfig-pool-persist ipp.txt Here is TCP/UDP based windows CLIENT config: remote xx.xx.xx.xx --socket-flags TCP_NODELAY tls-client port 1194 proto tcp-client #proto udp dev tap #dev tun pull ca ca.crt cert latis.crt key latis.key mute 0 comp-lzo adaptive verb 3 resolv-retry infinite nobind persist-key auth-user-pass auth-nocache script-security 2 mssfix cipher BF-CBC

    Read the article

  • Skype performance in IPSEC VPN

    - by dunxd
    I've been challenged to "improve Skype performance" for calls within my organisation. Having read the Skype IT Administrators Guide, I am wondering whether we might have a performance issue where the Skype clients in a call are all on our WAN. The call is initiated by a Skype client at our head office and terminated on a Skype client in a remote office connected via IPSEC VPN. Where this happens, I assume the traffic from Client A (encrypted by Skype) goes to our ASA 5510, where it is further encrypted, sent to the remote ASA 5505, decrypted, and then passed to Client B, which decrypts the Skype encryption. Would the call quality benefit if the traffic didn't go over the VPN, but instead relied only on Skype's encryption? I imagine I could achieve this by setting up a SOCKS5 proxy in our HQ DMZ for Skype traffic. Then the traffic goes from Client A to the proxy, over the Skype relay network, arrives at the Cisco ASA 5505 as any other Internet traffic, and then reaches Client B. Is there likely to be any performance benefit in doing this? If so, is there a way to do it that doesn't require a proxy? Has anyone else tackled this?

    Read the article

  • Always failing to connect to Outlook Anywhere through TMG 2010 with a certificate

    - by Albert Widjaja
    Hi, I have successfully published Exchange Activesync using TMG 2010 and OWA internally only but somehow when I tried to publish the Outlook Anywhere it failed ( as can be seen from the https://www.testexchangeconnectivity.com ) Settings: IIS 7 settings, I have unchecked the require SSL and "Ignore" the client certificate Exchange CAS settings: ServerName : ExCAS02-VM SSLOffloading : True ExternalHostname : activesync.domain.com ClientAuthenticationMethod : Basic IISAuthenticationMethods : {Basic} MetabasePath : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc Path : C:\Windows\System32\RpcProxy Server : ExCAS02-VM AdminDisplayName : ExchangeVersion : 0.1 (8.0.535.0) Name : Rpc (Default Web Site) DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative....... Identity : ExCAS02-VM\Rpc (Default Web Site) Guid : 59873fe5-3e09-456e-9540-f67abc893f5e ObjectCategory : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory} WhenChanged : 18/02/2011 4:31:54 PM WhenCreated : 18/02/2011 4:30:27 PM OriginatingServer : ADDC01.domainad.com IsValid : True Test-OutlookWebServices settings: 1013 Error When contacting https://activesync.domain.com/Rpc received the error The remote server returned an error: (500) Internal Server Error. 1017 Error [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds. https://www.testexchangeconnectivity.com testing result: Checking the IIS configuration for client certificate authentication. Client certificate authentication was detected. Additional Details Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication. environment: Windows Server 2008 (HT-CAS) Exchange Server 2007 SP1 TMG 2010 Standard Outlook 2007 client SP2. Any kind of help would be greatly appreciated. Thanks.

    Read the article

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am showing the following: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:11:49 AM User: COMPANY\ThisUser Computer: SHAREPOINT Description: Object Open: Object Server: Security Account Manager Object Type: SAM_ALIAS Object Name: DOMAINS\Account\Aliases\00000404 Handle ID: - Operation ID: {0,1719489} Process ID: 416 Image File Name: C:\WINDOWS\system32\lsass.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: ThisUser Client Domain: PRINTRON Client Logon ID: (0x0,0x1A3BC2) Accesses: AddMember RemoveMember ListMembers ReadInformation Privileges: - Restricted Sid Count: 0 Access Mask: 0xF Then, four of these in a row: Event Type: Failure Audit Event Source: Security Event Category: Object Access Event ID: 560 Date: 3/18/2010 Time: 11:12:08 AM User: NT AUTHORITY\NETWORK SERVICE Computer: SHAREPOINT Description: Object Open: Object Server: SC Manager Object Type: SERVICE OBJECT Object Name: WinHttpAutoProxySvc Handle ID: - Operation ID: {0,1727132} Process ID: 404 Image File Name: C:\WINDOWS\system32\services.exe Primary User Name: SHAREPOINT$ Primary Domain: COMPANY Primary Logon ID: (0x0,0x3E7) Client User Name: NETWORK SERVICE Client Domain: NT AUTHORITY Client Logon ID: (0x0,0x3E4) Accesses: Query status of service Start the service Query information from service Privileges: - Restricted Sid Count: 0 Access Mask: 0x94 Any ideas what permissions I need to grant to the user to get them access to SharePoint?

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over a network via an automatic and unattended process. This is to reset the computers to a default state, as well as update the computers to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following: Client begins in a powered down WakeOnLan accepting state. Ghost Console task uses WakeOnLan and PXE to boot the client into the WinPE environment. The client connects to the ghost console and reimages itself successfully. The client closes WinPE and restarts. PROBLEM: The client boots into the WinPE environment again, instead of the newly installed OS (Win7) I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the WakeOnLan state again. Ghost console even reports an error on the process, that it never rebooted into the OS. Right now it seems that there is not an option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, its pointless, because it will always boot the default option. I would expect the ghost console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.

    Read the article

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions around the issue of forcing Internet traffic to go through the OpenVPN server but they are all done in Linux, all I want to know is how to add an entry to the route table in windows to make this happen. connectivity between the client and server is fine, my Windows 7 client can establish a connection to the Windows 2008 Server, but when established Internet traffic is still going from the local Windows 7 machine. Here are the details: Server: Windows 2008 Server with one NIC OpenVPN IP Address: 192.168.0.1 Local NIC IP Address (connects the server to the Internet): 10.242.69.107 Client: Windows 7 with one NIC OpenVPN IP Address: 192.168.0.2 ISP allocated IP Address: 10.0.8.2 (gateway 10.0.8.1) Server OpenVPN Config: dev tun ifconfig 192.168.0.1 192.168.0.2 secret static.key push "redirect-gateway def1" Client OpenVPN Config: remote xxx.xxx.com dev tun ifconfig 192.168.0.2 192.168.0.1 secret static.key I'm not an expert with adding routes...etc. I would be grateful if someone could let me know how to add this entry in my server/client route table. EDIT: Output from the client's netstat -rnv IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.0.8.1 10.0.8.2 20 10.0.8.0 255.255.255.252 On-link 10.0.8.2 276 10.0.8.2 255.255.255.255 On-link 10.0.8.2 276 10.0.8.3 255.255.255.255 On-link 10.0.8.2 276 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.0.0 255.255.255.252 On-link 192.168.0.2 286 192.168.0.2 255.255.255.255 On-link 192.168.0.2 286 192.168.0.3 255.255.255.255 On-link 192.168.0.2 286 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.0.8.2 276 224.0.0.0 240.0.0.0 On-link 192.168.0.2 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.0.8.2 276 255.255.255.255 255.255.255.255 On-link 192.168.0.2 286 ===========================================================================
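    One thing worth checking (hedged): push directives are part of the TLS-mode client/server negotiation, so in a static-key point-to-point setup like this one the client never pulls "redirect-gateway def1" from the server. Putting the option in the client config usually achieves the same effect, or the routes can be added by hand on the Windows 7 client; the VPN server's public endpoint address below is an assumption and should be whatever xxx.xxx.com resolves to:

        # client config: route everything through the tunnel; OpenVPN also adds a
        # host route to the tunnel endpoint via the original gateway automatically
        redirect-gateway def1

        :: or, manually from an elevated cmd prompt on the client
        route add <vpn-server-public-ip> mask 255.255.255.255 10.0.8.1
        route add 0.0.0.0 mask 0.0.0.0 192.168.0.1 metric 5

    The Windows 2008 server then also needs to forward and NAT the tunnel subnet (for example with RRAS NAT or Internet Connection Sharing on its Internet-facing NIC), otherwise the redirected traffic has nowhere to go.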

    Read the article
