Search Results

Search found 9658 results on 387 pages for 'authentication provider'.

Page 343/387 | < Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >

  • Using awk to return only certain chunks of data

    - by Koriar
    I'm not 100% certain how to phrase my question simply, so I apologize if this has been answered somewhere and I was just unable to find it. What I have are debug logs with authentication packets in them along with a bunch of other output. I need to search through about 2 million lines of logs to find every packet that contains a certain MAC address. The packets look something like this (slightly censored):

        -----------------[ header ]-----------------
        Event: Authd-Response (1900)
        Sequence: -54
        Timestamp: 1969-12-31 19:30:00 (0)
        ---------------[ attributes ]---------------
        Auth-Result = Auth-Accept
        Service-Profile-SID = 53
        Service-Profile-SID = 49
        RADIUS-Access-Accept-Attr/WiMAX-Capability = 0x(numbers)
        Session-Timeout = 3600
        Service-Profile-SID = 4
        Service-Profile-SID = 29
        Chargeable-User-Identity = "(Numbers)"
        User-Password = "(the MAC address I'm looking for)"
        --------------------------------------------

    However there are about 10 different possible types with different possible lengths. They all start with the header line and end with the all-dashes line. I've had success getting the blocks themselves using awk's range pattern:

        awk '/-----------------\[ header \]-----------------/,/--------------------------------------------/' filename.txt

    But I was hoping to be able to use it to return only the packets which contain the MAC address that I need. I've been trying to figure this out for a few days now and I'm pretty stuck. I could try and write a bash script, but I could swear that I've used awk to do something like this before...
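
    A minimal awk sketch of one way to do this (the MAC string is a placeholder, and the script assumes every packet really is bracketed by the header line and an all-dashes line): buffer each block and only print it if it contains the target address.

        awk -v mac='00:11:22:33:44:55' '
            /\[ header \]/    { inblk = 1; blk = "" }             # start of a packet
            inblk             { blk = blk $0 ORS }                # accumulate the block
            /^-+$/ && inblk   { if (blk ~ mac) printf "%s", blk   # print only matching packets
                                inblk = 0 }
        ' filename.txt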

    Read the article

  • Not able to connect to TFS Server from TFS Proxy

    - by GV India
    In our office we have set up TFS for project development. The TFS server is Windows 2003 Server SP2 with VSTFS 2008 and is running fine. Now we need to set up a TFS proxy server at the client site for the client to access. Before going for the client setup, I wanted to build and test the proxy in our office on a dummy server (I'll call it the Proxy server from here on) by keeping it on a different domain. The OS configuration of the Proxy server is the same as the TFS server. I have installed and configured TFS Proxy on the Proxy server to connect to the TFS server, and we have also built a trust between the two domains to enable communication. Now the problem is that I am not able to connect to the TFS server at all. I am trying to connect from Internet Explorer on the Proxy server using the proxy service account. It gives me the error: "The page cannot be displayed. HTTP 500 - Internal server error." The page I was browsing was http://tfs:8080/VersionControl/v1.0/ProxyStatistics.asmx. I think I have done all the required steps correctly to configure the proxy as described in MSDN and also the TFS installation guide. The proxy service account is a member of the 'Team Foundation Valid Users' group. I am able to connect to the TFS server (specifying the port) using telnet from a command prompt on the Proxy server, as suggested by a few sites. The TFS server web sites have been configured to use Integrated Windows Authentication. The event logs on both servers are also not showing any errors. Overall, I'm not able to get it working. Any ideas on what might be the problem?

    Read the article

  • How to prevent ADO.NET from altering double values when it reads from Excel files

    - by Khnle
    I have the following rows in my Excel input file:

        Column1  Column2
        0-5      3.040
        6        2.957
        7        2.876

    and the following code which uses ADO.NET to read it:

        string fileName = "input.xls";
        var connectionString = string.Format("Provider=Microsoft.Jet.OLEDB.4.0; data source={0}; Extended Properties=Excel 8.0;", fileName);
        var dbConnection = new OleDbConnection(connectionString);
        dbConnection.Open();
        try
        {
            var dbCommand = new OleDbCommand("SELECT * FROM [Sheet1$]", dbConnection);
            var dbReader = dbCommand.ExecuteReader();
            while (dbReader.Read())
            {
                string col1 = dbReader.GetValue(0).ToString();
                string col2 = dbReader.GetValue(1).ToString();
            }
        }
        finally
        {
            dbConnection.Close();
        }

    The results are very disturbing. Here's why. The values of each column the first time through the loop: col1 is empty (blank), col2 is 3.04016411633586. Second time: col1 is 6, col2 is 2.95722928448829. Third time: col1 is 7, col2 is 2.8763272933077. The first problem happens with col1 in the first iteration: I expect 0-5. The second problem happens at every iteration with col2, where ADO.NET apparently alters the values as it reads them. How do I stop this misbehavior?
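
    A hedged note and sketch: the Jet provider samples the first rows of each column to guess its type, so a mostly-numeric column silently drops text values such as "0-5", and the 3.040 shown in Excel is likely only the cell's display format over the stored double 3.04016411633586 - ADO.NET is returning the stored value, not altering it. One commonly suggested mitigation (not a guaranteed fix; exact behaviour also depends on the TypeGuessRows/ImportMixedTypes registry settings) is to open the sheet in import mode so mixed columns come back as text:

        // Sketch only: IMEX=1 asks Jet to treat mixed-type columns as text.
        var connectionString = string.Format(
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};" +
            "Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\";", fileName);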

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on a web page I found, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
            WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup', SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My question (TL;DR): how can I shrink my transaction log file without disabling the mirror?
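
    A hedged sketch of the usual alternative (paths and names are placeholders, and it assumes the database is in FULL recovery, which mirroring requires): a real log backup to disk - unlike a backup to 'NULL' - marks the inactive log as reusable even while the database is mirrored, so scheduling regular log backups and shrinking once afterwards is normally enough to stop the growth.

        -- Sketch only: schedule this regularly, then shrink once after a backup.
        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn';
        GO
        DBCC SHRINKFILE (N'dbname_Log', 2048);
        GO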

    Read the article

  • Going "behind Hibernate's back" to update foreign key values without an associated entity

    - by Alex Cruise
    Updated: I wound up "solving" the problem by doing the opposite! I now have the entity reference field set as read-only (insertable=false, updatable=false), and the foreign key field read-write. This means I need to take special care when saving new entities, but on querying, the entity properties get resolved for me. I have a bidirectional one-to-many association in my domain model, where I'm using JPA annotations and Hibernate as the persistence provider. It's pretty much your bog-standard parent/child configuration, with one difference being that I want to expose the parent's foreign key as a separate property of the child alongside the reference to a parent instance, like so:

        @Entity
        public class Child {
            @Id @GeneratedValue
            Long id;

            @Column(name="parent_id", insertable=false, updatable=false)
            private Long parentId;

            @ManyToOne(cascade=CascadeType.ALL)
            @JoinColumn(name="parent_id")
            private Parent parent;

            private long timestamp;
        }

        @Entity
        public class Parent {
            @Id @GeneratedValue
            Long id;

            @OrderBy("timestamp")
            @OneToMany(mappedBy="parent", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
            private List<Child> children;
        }

    This works just fine most of the time, but there are many (legacy) cases when I'd like to put an invalid value in the parent_id column without having to create a bogus Parent first. Unfortunately, Hibernate won't save values assigned to the parentId field due to insertable=false, updatable=false, which it requires when the same column is mapped to multiple properties. Is there any nice way to "go behind Hibernate's back" and sneak values into that field without having to drop down to JDBC or implement an interceptor? Thanks!
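
    A minimal sketch of the reversed mapping the update describes (class and column names follow the question; this is one way to express it, not the only one): the plain Long column becomes the writable property, and the association is marked read-only so Hibernate resolves it on queries but never writes through it.

        @Entity
        public class Child {
            @Id @GeneratedValue
            Long id;

            // Writable foreign key: any value, valid or not, can be assigned here.
            @Column(name="parent_id")
            private Long parentId;

            // Read-only view of the association, resolved on queries only.
            @ManyToOne(fetch=FetchType.LAZY)
            @JoinColumn(name="parent_id", insertable=false, updatable=false)
            private Parent parent;

            private long timestamp;
        }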

    Read the article

  • Why doesn't my Access database update?

    - by Ryan
    Recently I met a strange problem; see the code snippet below:

        var
          sqlCommand: string;
          connection: TADOConnection;
          qry: TADOQuery;
        begin
          connection := TADOConnection.Create(nil);
          try
            connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
            connection.Open();
            qry := TADOQuery.Create(nil);
            try
              qry.Connection := connection;
              qry.SQL.Text := 'Select * from aaa';
              qry.Open;
              qry.Append;
              qry.FieldByName('TestField1').AsString := 'test';
              qry.Post;
              beep;
            finally
              qry.Free;
            end;
          finally
            connection.Free;
          end;
        end;

    First, create a new Access database named test.mdb and put it under the directory of this test project; in it, create a new table named aaa which has only one text-type field named TestField1. Set a breakpoint at the "beep" line, then launch the test application under IDE debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open test.mdb in Microsoft Access and open table aaa: you will find no changes in table aaa. If you let the IDE continue running by pressing F9, you will find a new record has been inserted into table aaa, but if you press Ctrl+F2 to terminate the application at the breakpoint, you will find no record has been inserted. In normal circumstances, a new record should be inserted into table aaa after qry.Post executes. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access MDB file was created by Microsoft Access 2007 under Windows 7.
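
    A hedged observation: the Jet OLE DB provider buffers writes and flushes them to the .mdb asynchronously (typically within a few seconds), so suspending the process at a breakpoint can leave the result of Post in memory only, and killing the process discards it. One sketch of a workaround is to wrap the edit in an explicit ADO transaction so the commit pushes the insert out (TADOConnection.BeginTrans/CommitTrans are standard methods; whether this fully avoids the lazy flush can still depend on provider settings):

        connection.Open();
        connection.BeginTrans;                          // start an explicit transaction
        try
          qry.Connection := connection;
          qry.SQL.Text := 'Select * from aaa';
          qry.Open;
          qry.Append;
          qry.FieldByName('TestField1').AsString := 'test';
          qry.Post;
          connection.CommitTrans;                       // commit forces the insert to the MDB
        except
          connection.RollbackTrans;
          raise;
        end;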

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = primary account number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses etc.) and provides an individual Key A for each PAN stored.

    Server B is a local server. It actually holds the encrypted PANs, as well as Key B, and does the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks, jtnire

    Read the article

  • What is a root directory in IIS 6, and how do I make a subfolder of my ASP.NET website the root directory?

    - by R_Coder
    I need to integrate a third-party plugin in my ASP.NET website. To install the plugin, the instructions say: "Create an application through your IIS control panel with root directory at (some path inside my website folder)". I am not very familiar with IIS and have rarely worked with it. Though I tried every possible way I could in IIS, I am not able to work it out. After installation, there is a test page provided by the plugin which I have to run to check it, but when I run it, it shows this error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS." I searched this error too and found that it is caused by the two Web.config files, one from the main project and another from the plugin folder. The only way to deal with this is to make the plugin folder they specified the root directory in IIS. Could someone kindly tell me some easy steps to do this? What I was doing is: in IIS 6, I added a new website pointing at the main folder of my ASP.NET website, then I right-clicked the plugin folder, added an application and chose the given path, thinking it would become the root directory, but it didn't. Help would be appreciated. Also note that I have to put the plugin folder inside my main website folder only, so there are two web.config files. I tried renaming one of them too; it solved the above error but caused other errors, but I think the main problem is the root directory. P.S. They show me the above error on the web.config file of the plugin folder, on this line: "Line 51: <authentication mode="Windows" />"

    Read the article

  • IIS to SQL Server Kerberos auth issues

    - by crosan
    We have a 3rd-party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account). We also set up the SPN for the SrvWeb_iis account via the following command:

        setspn -A HTTP/SrvWeb.company.com SrvWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query.
        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logging in, and it seems to be using NTLM and not Kerberos:

        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
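
    A sketch of the SPN registrations commonly checked in this kind of setup (host names come from the question; the SQL Server service account shown is a placeholder): the HTTP SPN usually has to cover the name users actually browse to, and delegation to SQL Server also needs an SPN on the account running the SQL service, otherwise the stack silently falls back to NTLM.

        REM Clients browse to application.company.com, so register that name too.
        setspn -A HTTP/application.company.com company\SrvWeb_iis

        REM Kerberos to SQL Server needs an SPN on the SQL service account
        REM (company\sqlSvc is a placeholder here).
        setspn -A MSSQLSvc/SvrSQL.company.com:1433 company\sqlSvc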

    Read the article

  • Sorcery/Capybara: Cannot log in with :js => true

    - by PlankTon
    I've been using Capybara for a while, but I'm new to Sorcery. I have a very odd problem whereby if I run the specs without Capybara's :js => true functionality I can log in fine, but if I specify :js => true on a spec, the username/password cannot be found. Here's the authentication macro:

        module AuthenticationMacros
          def sign_in
            user = FactoryGirl.create(:user)
            user.activate!
            visit new_sessions_path
            fill_in 'Email Address', :with => user.email
            fill_in 'Password', :with => 'foobar'
            click_button 'Sign In'
            user
          end
        end

    Which is called in specs like this:

        feature "project setup" do
          include AuthenticationMacros

          background do
            sign_in
          end

          scenario "creating a project" do
            "my spec here"
          end
        end

    The above code works fine. However, if I change the scenario spec from (in this case)

        scenario "adding questions to a project" do

    to

        scenario "adding questions to a project", :js => true do

    login fails with an 'incorrect username/password' combination. Literally the only change is the :js => true. I'm using the default Capybara JavaScript driver (loads up Firefox). Any ideas what could be going on here? I'm completely stumped. I'm using Capybara 2.0.1 and Sorcery 0.7.13. There is no JavaScript on the sign-in page, and save_and_open_page before clicking 'Sign In' confirms that the correct details are entered into the username/password fields. Any suggestions really appreciated - I'm at a loss.
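
    A hedged guess that often explains exactly this symptom: JS-driven specs run the app in a separate server thread with its own database connection, so records created inside a wrapped transaction (transactional fixtures or a transaction cleaning strategy) are invisible to it, and the freshly created user genuinely does not exist as far as the login controller can see. A minimal spec_helper sketch, assuming the database_cleaner gem is in use:

        # spec_helper.rb (sketch) -- use truncation for JS specs so the created
        # user is committed and visible to the app server thread.
        RSpec.configure do |config|
          config.use_transactional_fixtures = false

          config.before(:suite)             { DatabaseCleaner.clean_with(:truncation) }
          config.before(:each)              { DatabaseCleaner.strategy = :transaction }
          config.before(:each, :js => true) { DatabaseCleaner.strategy = :truncation }
          config.before(:each)              { DatabaseCleaner.start }
          config.after(:each)               { DatabaseCleaner.clean }
        end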

    Read the article

  • Error accessing uncompiled pages using ISA Server 2006 SP1

    - by Bravax
    We are in the process of configuring a portal to use ISA Server as our front-end security provider, so we are using ISA Server 2006 SP1. Unfortunately, when .NET applications are accessed through ISA Server for the first time, i.e. before they have been compiled, the following error appears:

        Error Code: 500 Internal Server Error. The parameter is incorrect. (87)

    In the ISA monitoring logs, this shows:

        Failed Connection Attempt
        Log type: Web Proxy (Reverse)
        Status: 87 The parameter is incorrect.

    Once the application is compiled, the error never appears. Does anyone know how to resolve this, so the site works correctly the first time? Some additional information: the websites being accessed are running on Windows Server 2008 64-bit Standard Edition, and the problem occurs for SharePoint as well as standard .NET websites. ISA Server is running on Windows Server 2003 R2 SP2 Standard Edition. The firewall on the Windows Server 2008 box allows all access (to rule this out). Nothing odd appears in the IIS logs or firewall logs.

    Read the article

  • How can I use a custom configured RememberMeAuthenticationFilter in spring security?

    - by Sebastian
    I want to use a slightly customized remember-me functionality with Spring Security (3.1.0). I declare the remember-me tag like this:

        <security:remember-me key="JNJRMBM" user-service-ref="gymUserDetailService" />

    As I have my own remember-me service, I need to inject that into the RememberMeAuthenticationFilter, which I define like this:

        <bean id="rememberMeFilter" class="org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter">
            <property name="rememberMeServices" ref="gymRememberMeService"/>
            <property name="authenticationManager" ref="authenticationManager" />
        </bean>

    I have Spring Security integrated in a standard way in my web.xml:

        <filter-name>springSecurityFilterChain</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>

    Everything works fine, except that the RememberMeAuthenticationFilter uses the standard RememberMeService, so I think that my defined RememberMeAuthenticationFilter is not being used. How can I make sure that my definition of the filter is being used? Do I need to create a custom filter chain? And if so, how can I see my current "implicit" filter chain and make sure I use the same one, except with my RememberMeAuthenticationFilter instead of the default one? Thanks for any advice and/or pointers!
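
    A hedged alternative to wiring the filter bean by hand: the namespace element itself can point at a custom RememberMeServices bean via services-ref, so the filter that the <http> configuration builds uses that bean instead of the default one. A sketch, reusing the bean name from the question:

        <!-- Sketch: let the namespace-built remember-me filter use the custom service. -->
        <security:remember-me services-ref="gymRememberMeService" />

        <!-- gymRememberMeService then carries its own key and UserDetailsService;
             if a key is also kept on the element, it should match the bean's key. -->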

    Read the article

  • "requiresuniqueemail=true" implementation in asp.net site

    - by domineer
    Hi people, I have a social networking site that is running live right now. The first time I launched the site I left requiresUniqueEmail="false" in my web.config in order to create dummy accounts for testing purposes and to get the site started. However, the site is fairly stable now, with almost 5k members, so I would like to set requiresUniqueEmail to true so that users cannot reuse an existing email address, and so that I can make sure there is a unique email address for each site user. I know the site has around 100 users sharing the same email address. My question is: what problems am I going to face if I do this now (requiresUniqueEmail="true"), and how do I do it efficiently (without errors and, if possible, site-wide, say in global.asax)? I tested it and I already get an error when logging out of an account. Say a user clicks log out; this code runs:

        Dim d As DateTime = DateTime.Now.AddMinutes(-1 * Membership.UserIsOnlineTimeWindow)
        Dim theuser As MembershipUser = Membership.GetUser()
        theuser.LastActivityDate = d
        Membership.UpdateUser(theuser)
        If Not Cache(Page.User.Identity.Name.ToLower() + "currentstatus") Is Nothing Then
            Cache.Remove(Page.User.Identity.Name.ToLower() + "currentstatus")
        End If

    Then an exception occurs in the UpdateUser() call, saying: System.Configuration.Provider.ProviderException: The E-mail supplied is invalid. This is just one instance of a problem I know I've encountered. Hoping to hear your ideas...
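
    A hedged preparatory step, assuming the default SqlMembershipProvider schema (table and column names below come from that schema): list the accounts that share an email address and clean them up before flipping the flag, since once uniqueness is required the provider rejects updates to any user whose email is no longer unique - which is exactly what the UpdateUser call above runs into.

        -- Sketch: find duplicate email addresses in the default membership schema.
        SELECT LoweredEmail, COUNT(*) AS Accounts
        FROM dbo.aspnet_Membership
        GROUP BY LoweredEmail
        HAVING COUNT(*) > 1
        ORDER BY Accounts DESC;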

    Read the article

  • DotNetOpenId - Using OpenIdButton for Google Apps

    - by JediPotPie
    I am an ASP.NET newbie and I am trying to design an OpenID/SSO system for an internal web application. The web application is pretty simple, and authentication is currently managed by a database of usernames and passwords. I want to replace the existing accounts stored in the database with Google Apps accounts. I have downloaded the latest DotNetOpenAuth-3.4.3.10103 package and got the OpenIdRelyingPartyWebForms sample up and running on IIS. I have built my own login page using just an OpenIdButton object that points to a development Google domain. The button seems to work fine in Firefox - at least it forwards me to the Google Apps login - but nothing happens when I load the same page in IE. When I click the Google button, nothing happens, zip. The same is true for the Yahoo button on the login.aspx page given in the sample. Here is the .aspx code I am using:

        <rp:OpenIdButton runat="server"
            ImageUrl="http://www.google.com/accounts/google_transparent.gif"
            Text="Login with Google!"
            ID="googleLoginButton"
            Identifier="https://www.google.com/accounts/o8/site-xrds?hd=dev.connexcloud.com" />

    Read the article

  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, with its own tables. So in other words, the databases our MySQL server contains look like:

        main-web-app-db (our web app, containing tables for users' account info, billing, etc.)
        user_1_db (baseball_cards_table)
        user_2_db (recipes_table)
        ....

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data and then decide they want to change a column, we'd do an "alter table ....". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this. 1) My first concern is that switching databases on every request - first to our main app's database for authentication etc., and then to the user's personal database - is going to be inefficient. 2) The second concern is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more? 3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how it affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way, but what do I know. I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?

    Read the article

  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi! I have successfully implemented simple OpenID authentication with Gmail, Yahoo, MyOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem with identifying accounts. For example:

        'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32)
        'openid_mode' => string 'id_res' (length=6)
        'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37)
        'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34)
        'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43)
        'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72)
        'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69)
        'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44)
        'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)
        'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)

    I thought that openid_claimed_id or openid_identity would be identical for every account, but after some testing it turns out the claimed ID can be changed by the users themselves in Yahoo's case and changes seemingly at random in Gmail's case. FYI, I am using Zend OpenID with a small modification to the consumer class, and I can't use SREG (Simple Registration Extension) because it currently targets the OpenID v1 specification :( I haven't read the OpenID specification, so if anybody knows, please help me figure out how to identify OpenID accounts, i.e. how to know whether an account has signed in before or not.

    Read the article

  • How to reliably identify users across Internet?

    - by amn
    I know this is a big one. In fact, it may be suited to a community wiki. Anyway, I am running a website that does NOT use explicit authentication of users; it's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out due to misbehavior. I am currently blocking IP addresses, but I am aware of the supposed fact that many people purposefully reset their DHCP client cache to have their ISP assign them a new address. Is that a fact? I think it certainly is a lucrative possibility for some people who want to circumvent being denied access. So IPs turn out to be a suboptimal way of dealing with this. But there is nothing else, is there? MAC addresses don't survive on the WAN (they change from hop to hop), and even if they did, these can also be spoofed, although I think less easily than IP renewal. Cookies and even Flash cookies are out of the question, because there are tons of "tutorials" on how to wipe these, and those intent on wreaking havoc on the Internet are well aware of and well equipped against such rudimentary measures. Is there anything else to lean on? I was thinking of heuristic profiling - collecting available data from the client side and forming some key with it - but I have not gone as far as implementing it. Is it an option?

    Read the article

  • Outgoing Emails (Git Patches) Blocked by Windows Live

    - by SteveStifler
    Just recently I dove into the VideoLAN open source project. This was my first time using git, and when sending in my first patch (using git send-email --to [email protected] patches), I was sent the following message from my computer's local mail in the terminal (I'm on OS X 10.6, by the way):

        Mail rejected by Windows Live Hotmail for policy reasons. We generally do not accept
        email from dynamic IP's as they are not typically used to deliver unauthenticated SMTP
        e-mail to an Internet mail server. http:/www.spamhaus.org maintains lists of dynamic
        and residential IP addresses. If you are not an email/network admin please contact
        your E-mail/Internet Service Provider for help. Email/network admins, please visit
        http://postmaster.live.com for email delivery information and support

    They must think I'm a spammer. I have a dynamic IP and my ISP (Charter) won't let me get a static one, so I tried changing the git preferences to my Gmail account:

        git config --global user.email "[email protected]"

    However, I got the exact same message again. My guess is that it has something to do with the local mail system's preferences, but I have no idea how to access or modify them. Anybody have any ideas for solving this? Thanks!
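
    A hedged sketch of the usual fix: user.email only changes the From header, not how the mail is delivered. git send-email can instead submit through an authenticated SMTP relay (Gmail shown below as an example; the address is a placeholder), which avoids delivering directly from a dynamic IP.

        # Route git send-email through Gmail's authenticated SMTP relay (sketch).
        git config --global sendemail.smtpserver smtp.gmail.com
        git config --global sendemail.smtpserverport 587
        git config --global sendemail.smtpencryption tls
        git config --global sendemail.smtpuser [email protected]
        # git send-email prompts for the SMTP password when sending.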

    Read the article

  • MS Access Bulk Insert exits app

    - by Brij
    In a web service, I have to bulk insert data into an MS Access database. At first there was a single complex INSERT...SELECT query; there was no error, but the app exited after inserting some records. I have divided the query to make it simpler and am using LINQ to remove complexity from the query. Now I am inserting records using a loop - there are approx 10,000 records. Again, same problem. I put a breakpoint after the loop, but it is never hit. I have used try/catch, but no error is caught.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", _
                    item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + _
                System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there any limitation in MS Access?
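
    A hedged sketch of a variant worth trying: building SQL with String.Format breaks as soon as a value contains a quote, and opening a new connection per row for 10,000 inserts is heavy on Jet. The version below reuses one connection and a parameterized command (names follow the question's code; the parameter types and sizes are guesses and would need adjusting to the real schema):

        ' Sketch: one shared connection, one parameterized command, reused per row.
        Using con As New OleDb.OleDbConnection(nowConString)
            con.Open()
            Using cmd As New OleDb.OleDbCommand( _
                "insert into [Table](Id,link,file1,file2,pID) values (?,?,?,?,?)", con)
                cmd.Parameters.Add("@p1", OleDb.OleDbType.Integer)
                cmd.Parameters.Add("@p2", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@p3", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@p4", OleDb.OleDbType.VarWChar, 255)
                cmd.Parameters.Add("@p5", OleDb.OleDbType.Integer)
                For Each item In lstSelDel
                    cmd.Parameters(0).Value = item.WebInfoID
                    cmd.Parameters(1).Value = item.Links
                    cmd.Parameters(2).Value = item.Name
                    cmd.Parameters(3).Value = item.pName
                    cmd.Parameters(4).Value = pDateID
                    cmd.ExecuteNonQuery()
                Next
            End Using
        End Using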

    Read the article

  • Firefox proxy dilemma

    - by Mike L.
    Any idea why, when using system proxy settings in Firefox, it cannot accept a proxy in the form user:password@server:port? IE will allow and connect to a proxy in this format. Not only does Firefox not work, but it does not prompt for the password, nor attempt to make a connection to the proxy; basically I get a "proxy server not found" error. Anybody know a way around this? I am working on a proxy-switching program for IE & Firefox, and I would like to use system-wide proxy settings. If I just store the server:port combination, Firefox prompts for the password, as does IE. Then the credentials can be cached and it will not ask again. Maybe my only option is to programmatically cache the user/pass? Anybody know a way to do this? I am pretty sure IE stores them as HTTP basic authentication passwords and I can add them with AddCredential. After saving a password for a proxy in Firefox, it shows up in saved passwords in a format like "moz-proxy://server:port" - anybody know how to programmatically add a saved password to Firefox? Thanks

    Read the article

  • JasperReports reports access image via authenticated URL

    - by user363115
    Hi all, I am hoping that this is a simple issue with a simple solution and that I have missed something obvious. Let me explain the problem. We have an application that generates PDF reports (using Jasper). These reports contain data from our database, as well as imagery (photographs). These photographs are stored in S3, and we use signed URLs to access them and link them into our Jasper reports. Because the S3 URLs are signed and time-limited (by design), the process is as follows:

        1. User requests a report to be generated,
        2. Report is filled, and goes to our database (at which time the UUIDs of any required images are retrieved),
        3. For each UUID an S3 signed URL must be generated,
        4. To do this, the URL behind each report image is a call to an authenticated URL in our app (/get_img?uuid=foo),
        5. The controller behind this URL generates a signed S3 URL and returns it,
        6. The report loads the image.

    The problem is with step (4) - the call to the authenticated URL fails because Jasper does not pass any authentication information with the request. Is there a solution here? Thanks all for your time. Ben
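
    One hedged approach that sidesteps step (4) entirely: resolve the signed S3 URL in application code while filling the report and hand it to Jasper as a parameter (or as a field in the datasource), so the image expression is already a directly fetchable URL. A minimal sketch - the parameter name, s3Signer helper, compiledReport and dataSource variables are all placeholders:

        import java.util.HashMap;
        import java.util.Map;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;

        // Sketch: pre-sign the URL in application code, pass it as a report parameter.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("PHOTO_URL", s3Signer.signedUrlFor(uuid));   // hypothetical signing helper

        JasperPrint print = JasperFillManager.fillReport(compiledReport, params, dataSource);

        // In the .jrxml, the image expression would then be something like:
        //   <imageExpression><![CDATA[$P{PHOTO_URL}]]></imageExpression>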

    Read the article

  • How to get at JSON in grails 2.0

    - by Mikey
    I am sending myself JSON like so with jQuery:

        $.ajax({
            type: "POST",
            url: 'http://localhost:8080/myproject/myController/myAction',
            dataType: 'json',
            async: false,
            //json object to sent to the authentication url
            data: {"stuff":"yes", "listThing":[1,2,3], "listObjects":[{"one":"thing"},{"two":"thing2"}]},
            success: function () {
                alert("Thanks!");
            }
        })

    I send this to a controller and do println params, and I know I'm already in trouble:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:myAction, controller:myController]

    I cannot figure out how to get at most of these values... I can get "yes" with params.stuff, but I can't do params.listThing.each{} or params.listObjects.each{}. What am I doing wrong?

    UPDATE: I make the controller do this to try the two suggestions so far:

        println params
        println params.stuff
        println params.list('listObjects')
        println params.listThing
        def thisWontWork = JSON.parse(params.listThing)
        render("omg l2json")

    Look how weird the parameters look at the end of the null pointer exception when I try the answers:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:l2json, controller:rateAPI]
        yes
        []
        null
        | Error 2012-03-25 22:16:13,950 ["http-bio-8080"-exec-7] ERROR errors.GrailsExceptionResolver - NullPointerException occurred when processing request: [POST] /myproject/myController/myAction - parameters:
        stuff: yes
        listObjects[1][two]: thing2
        listObjects[0][one]: thing
        listThing[]: 1
        listThing[]: 2
        listThing[]: 3

    UPDATE 2: I am learning things, but this can't be right:

        println params['listThing[]']
        println params['listObjects[0][one]']

    prints

        [1, 2, 3]
        thing

    It seems like this is some part of Grails' new JSON marshaling. This is somewhat inconvenient for my purposes of hacking around with the values. How would I get all these individual params back into a big Groovy object of nested maps and lists? Maybe I am not doing what I want with jQuery?
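
    A hedged sketch of what usually resolves this: dataType only describes the expected response, so the data option above is sent as ordinary form-encoded parameters, which is why Grails shows listObjects[0][one]-style keys instead of nested structures. Posting an actual JSON body and reading request.JSON in the controller gives nested maps and lists back (names follow the question):

        // jQuery side: send a real JSON body.
        $.ajax({
            type: "POST",
            url: 'http://localhost:8080/myproject/myController/myAction',
            contentType: 'application/json',
            data: JSON.stringify({stuff: "yes", listThing: [1,2,3],
                                  listObjects: [{one: "thing"}, {two: "thing2"}]}),
            success: function () { alert("Thanks!"); }
        });

        // Grails controller side: request.JSON parses the body into maps/lists.
        def myAction() {
            def body = request.JSON
            println body.stuff                       // "yes"
            body.listThing.each { println it }       // 1, 2, 3
            body.listObjects.each { println it }     // [one:thing], [two:thing2]
            render "ok"
        }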

    Read the article

  • Get 1 array from 2 arrays (using RestKit 0.20)

    - by Reez
    I'm using RestKit and was wondering how to combine two array's into one array. I already have the data being pulled in separately from API1 and API2, but I don't know how to combine them into 1 tableView. Each API is pulling in media, and I want the combined tableView to show the most recent media (like any standard timeline does these days). I will post any extra code or help as necessary, thanks so much! Below shows API1 + API2 being pulled in correctly, but not combined into the tableView. Only data from API1 shows in the tableView. ViewController.m @interface StackOverflowViewController () @property (strong, nonatomic) NSArray *hArray; @property (strong, nonatomic) NSArray *springs; @property (strong, nonatomic) RKObjectManager *eObjectManager; @property (strong, nonatomic) NSArray *iArray; @property (strong, nonatomic) NSArray *imagesArray; @property (strong, nonatomic) RKObjectManager *iObjectManager; // Wain @property (strong, nonatomic) NSMutableArray *tableDataList; // Laarme @property (nonatomic, strong) NSMutableArray *contentArray; @property (nonatomic, strong) NSDateFormatter *dateFormatter1; // Dan @property (nonatomic, strong) NSMutableArray *combinedModel; @end @implementation StackOverflowViewController @synthesize tableView = _tableView; @synthesize spring; @synthesize leaf; @synthesize theme; @synthesize hArray; @synthesize springs; @synthesize eObjectManager; @synthesize iArray; @synthesize imagesArray; @synthesize iObjectManager; // Wain @synthesize tableDataList; // Laarme @synthesize contentArray; @synthesize dateFormatter1; // Dan @synthesize combinedModel; - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view. [self configureRestKit]; [self loadMediaDan]; [self sortCombinedModel]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. 
} - (void)configureRestKit { // API1 // initialize AFNetworking HTTPClient NSURL *baseURLE = [NSURL URLWithString:@"https://api.e.com"]; AFHTTPClient *clientE = [[AFHTTPClient alloc] initWithBaseURL:baseURLE]; // initialize RestKit RKObjectManager *eManager = [[RKObjectManager alloc] initWithHTTPClient:clientE]; self.eObjectManager = eManager; // setup object mappings RKObjectMapping *feedMapping = [RKObjectMapping mappingForClass:[Feed class]]; [feedMapping addAttributeMappingsFromArray:@[@"headline", @"premium", @"published", @"description"]]; RKObjectMapping *linksMapping = [RKObjectMapping mappingForClass:[Links class]]; RKObjectMapping *webMapping = [RKObjectMapping mappingForClass:[Web class]]; [webMapping addAttributeMappingsFromArray:@[@"href"]]; RKObjectMapping *mobileMapping = [RKObjectMapping mappingForClass:[Mobile class]]; [mobileMapping addAttributeMappingsFromArray:@[@"href"]]; RKObjectMapping *imagesMapping = [RKObjectMapping mappingForClass:[Images class]]; [imagesMapping addAttributeMappingsFromArray:@[@"height", @"width", @"url"]]; [feedMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"links" toKeyPath:@"links" withMapping:linksMapping]]; [feedMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"images" toKeyPath:@"images" withMapping:imagesMapping]]; [linksMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"web" toKeyPath:@"web" withMapping:webMapping]]; [linksMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"mobile" toKeyPath:@"mobile" withMapping:mobileMapping]]; // register mappings with the provider using a response descriptor RKResponseDescriptor *responseDescriptor = [RKResponseDescriptor responseDescriptorWithMapping:feedMapping method:RKRequestMethodGET pathPattern:nil keyPath:@"feed" statusCodes:[NSIndexSet indexSetWithIndex:200]]; [self.eObjectManager addResponseDescriptor:responseDescriptor]; // API2 // initialize AFNetworking HTTPClient NSURL *baseURLI = [NSURL URLWithString:@"https://api.i.com"]; AFHTTPClient *clientI = [[AFHTTPClient alloc] initWithBaseURL:baseURLI]; // initialize RestKit RKObjectManager *iManager = [[RKObjectManager alloc] initWithHTTPClient:clientI]; self.iObjectManager = iManager; // setup object mappings RKObjectMapping *dataMapping = [RKObjectMapping mappingForClass:[Data class]]; [dataMapping addAttributeMappingsFromArray:@[@"link", @"created_time"]]; RKObjectMapping *imagesMapping = [RKObjectMapping mappingForClass:[ImagesI class]]; [IMapping addAttributeMappingsFromArray:@[@""]]; RKObjectMapping *standardResolutionMapping = [RKObjectMapping mappingForClass:[StandardResolution class]]; [standardResolutionMapping addAttributeMappingsFromArray:@[@"url", @"width", @"height"]]; RKObjectMapping *captionMapping = [RKObjectMapping mappingForClass:[Caption class]]; [captionMapping addAttributeMappingsFromArray:@[@"text", @"created_time"]]; RKObjectMapping *userMapping = [RKObjectMapping mappingForClass:[User class]]; [userMapping addAttributeMappingsFromArray:@[@"username"]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"images" toKeyPath:@"images" withMapping:imagesMapping]]; [imagesMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"standard_resolution" toKeyPath:@"standard_resolution" withMapping:standardResolutionMapping]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"caption" toKeyPath:@"caption" 
withMapping:captionMapping]]; [dataMapping addPropertyMapping:[RKRelationshipMapping relationshipMappingFromKeyPath:@"user" toKeyPath:@"user" withMapping:userMapping]]; // register mappings with the provider using a response descriptor RKResponseDescriptor *responseDescriptor2 = [RKResponseDescriptor responseDescriptorWithMapping:dataMapping method:RKRequestMethodGET pathPattern:nil keyPath:@"data" statusCodes:[NSIndexSet indexSetWithIndex:200]]; [self.iObjectManager addResponseDescriptor:responseDescriptor2]; } - (void)loadMedia { // Laarme contentArray = [[NSMutableArray alloc] init]; [self sortByDates]; // API1 NSString *apikey = @kCLIENTKEY; NSDictionary *queryParams = @{@"apikey" : apikey,}; [self.eObjectManager getObjectsAtPath:[NSString stringWithFormat:@"v1/n/?limit=4&leafs=%@&themes=%@", leafAbbreviation, themeID] // Changed limit to 4 for the time being parameters:queryParams success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) { hArray = mappingResult.array; [self.tableView reloadData]; } failure:^(RKObjectRequestOperation *operation, NSError *error) { NSLog(@"No?: %@", error); }]; // API2 [self.iObjectManager getObjectsAtPath:[NSString stringWithFormat:@"v1/u/2/m/recent/?client_id=e999"] parameters:nil success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) { iArray = mappingResult.array; [self.tableView reloadData]; } failure:^(RKObjectRequestOperation *operation, NSError *error) { NSLog(@"No: %@", error); }]; } // Laarme - (void)sortByDates { NSDateFormatter *dateFormatter2 = [[NSDateFormatter alloc] init]; //Do the dateFormatter settings, you may have to use 2 NSDateFormatters if the format is different for Data & Feed //The initialization of the dateFormatter is done before the block, because its alloc/init take some time, and you may have to declare it with "__block" //Since in your edit you do that and it seems it's the same format, just do @property (nonatomic, strong) NSDateFormatter dateFormatter; NSArray *sortedArray = [contentArray sortedArrayUsingComparator:^NSComparisonResult(id a, id b) { // Added Curly Braces around if else statements and used feedObject NSDate *aDate, *bDate; Feed *feedObject = (Feed *)a; Data *dataObject = (Data *)b; if ([a isKindOfClass:[Feed class]]) { //Feed *feedObject = (Feed *)a; aDate = [dateFormatter1 dateFromString:feedObject.published];} else { //if ([a isKindOfClass:[Data class]]) aDate = [dateFormatter2 dateFromString:dataObject.created_time];} if ([b isKindOfClass:[Feed class]]) { bDate = [dateFormatter1 dateFromString:feedObject.published];} else {//if ([b isKindOfClass:[Data class]]) bDate = [dateFormatter2 dateFromString:dataObject.created_time];} return [aDate compare:bDate]; }]; } #pragma mark - Table View - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // API1 //return hArray.count; // API2 //return iArray.count; // API1 + API2 return hArray.count + iArray.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell; if(indexPath.row < hArray.count) { // Date Change NSDateFormatter *df = [[NSDateFormatter alloc] init]; [df setDateFormat:@"MMMM d, yyyy h:mma"]; // API 1 TableViewCell *api1Cell = [tableView dequeueReusableCellWithIdentifier:@"YourAPI1Cell"]; // Do everything you need to do with the api1Cell // Use the index in 'indexPath.row' to get the object from you array // API1 
Feed *feedLocal = [hArray objectAtIndex:indexPath.row]; // API1 NSString *dateString = [self timeSincePublished:feedLocal.published]; NSString *headlineText = [NSString stringWithFormat:@"%@", feedLocal.headline]; NSString *descriptionText = [NSString stringWithFormat:@"%@", feedLocal.description]; NSString *premiumText = [NSString stringWithFormat:@"%@", feedLocal.premium]; api1Cell.labelHeadline.text = [NSString stringWithFormat:@"%@", headlineText]; api1Cell.labelPublished.text = dateString; api1Cell.labelDescription.text = descriptionText; // SDWebImage API1 if ([feedLocal.images count] == 0) { // Not sure anything needed here } else { Images *imageLocal = [feedLocal.images objectAtIndex:0]; NSString *imageURL = [NSString stringWithFormat:@"%@", imageLocal.url]; NSString *imageWith = [NSString stringWithFormat:@"%@", imageLocal.width]; NSString *imageHeight = [NSString stringWithFormat:@"%@", imageLocal.height]; __weak UITableViewCell *wcell = cell; [cell.imageView setImageWithURL:[NSURL URLWithString:imageURL] placeholderImage:[UIImage imageNamed:@"X"] completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType) { // Something }]; } cell = api1Cell; } else { // Date Change NSDateFormatter *df = [[NSDateFormatter alloc] init]; [df setDateFormat:@"MMMM d, yyyy h:mma"]; // API 2 MRWebListTableViewCellTwo *api2Cell = [tableView dequeueReusableCellWithIdentifier:@"YourAPI2Cell"]; // Do everything you need to do with the api2Cell // Remember to use 'indexPath.row - hArray.count' as the index for getting an object for your second array // API 2 Data *dataLocal = [iArray objectAtIndex:indexPath.row - hArray.count]; // API 2 NSString *dateStringI = [self timeSincePublished:dataLocal.created_time]; NSString *captionTextI = [NSString stringWithFormat:@"%@", dataLocal.caption.text]; NSString *usernameI = [NSString stringWithFormat:@"%@", dataLocal.user.username]; api2Cell.labelHeadline.text = usernameI; api2Cell.labelDescription.text = captionTextI; api2Cell.labelPublished.text = dateStringI; // SDWebImage API 2 if ([dataLocal.images count] == 0) { NSLog(@"Images Count: %lu", (unsigned long)dataLocal.images.count); // Not sure anything needed here } else { ImagesI *imageLocalI = [dataLocal.images objectAtIndex:0]; StandardResolutionI *standardResolutionLocal = [imageLocalI.standard_resolution objectAtIndex:0]; NSString *imageURLI = [NSString stringWithFormat:@"%@", standardResolutionLocal.url]; NSString *imageWithI = [NSString stringWithFormat:@"%@", standardResolutionLocal.width]; NSString *imageHeightI = [NSString stringWithFormat:@"%@", standardResolutionLocal.height]; // 11.2 __weak UITableViewCell *wcell = cell; [cell.imageView setImageWithURL:[NSURL URLWithString:imageURLI] placeholderImage:[UIImage imageNamed:@"X"] completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType) { // Something }]; } cell = api2Cell; } return cell; } Feed.h @property (nonatomic, strong) Links *links; @property (nonatomic, strong) NSString *headline; @property (nonatomic, strong) NSString *source; @property (nonatomic, strong) NSDate *published; @property (nonatomic, strong) NSString *description; @property (nonatomic, strong) NSString *premium; @property (nonatomic, strong) NSArray *images; Data.h @property (nonatomic, strong) NSString *link; @property (nonatomic, strong) NSDate *created_time; @property (nonatomic, strong) UserI *user; @property (nonatomic, strong) NSArray *images; @property (nonatomic, strong) CaptionI *caption;
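
    A hedged sketch of the combining step itself, independent of the RestKit mappings above: merge the two result arrays into one and sort by each item's date before reloading the table (class and property names follow the question; published and created_time are assumed to already be mapped as NSDate values, as the Feed.h and Data.h excerpts suggest).

        // Sketch: merge both responses and sort newest-first by each item's date.
        NSMutableArray *combined = [NSMutableArray arrayWithArray:self.hArray];
        [combined addObjectsFromArray:self.iArray];

        NSArray *timeline = [combined sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
            NSDate *aDate = [a isKindOfClass:[Feed class]] ? [(Feed *)a published] : [(Data *)a created_time];
            NSDate *bDate = [b isKindOfClass:[Feed class]] ? [(Feed *)b published] : [(Data *)b created_time];
            return [bDate compare:aDate];   // descending: most recent first
        }];

        self.combinedModel = [timeline mutableCopy];
        [self.tableView reloadData];
        // The table data source would then use self.combinedModel.count and check the
        // class of each element in cellForRowAtIndexPath: to pick the right cell.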

    Read the article

  • Android: Resolution issue while saving image into gallery

    - by Luca D'Amico
    I've managed to programmatically take a photo with the camera, display it in an ImageView, and then, after pressing a button, save it to the Gallery. It works, but the problem is that the saved photos are low resolution... WHY?! I took the photo with this code:

        Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);

    Then I save the photo in a variable using:

        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == CAMERA_PIC_REQUEST) {
                thumbnail = (Bitmap) data.getExtras().get("data");
            }
        }

    and then, after displaying it in an ImageView, I save it to the Gallery with this function:

        public void SavePicToGallery(Bitmap picToSave, File savePath) {
            String JPEG_FILE_PREFIX = "PIC";
            String JPEG_FILE_SUFFIX = ".JPG";
            String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
            String imageFileName = JPEG_FILE_PREFIX + timeStamp + "_";
            File filePath = null;
            try {
                filePath = File.createTempFile(imageFileName, JPEG_FILE_SUFFIX, savePath);
            } catch (IOException e) {
                e.printStackTrace();
            }
            FileOutputStream out = null;
            try {
                out = new FileOutputStream(filePath);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
            picToSave.compress(CompressFormat.JPEG, 100, out);
            try {
                out.flush();
            } catch (IOException e) {
                e.printStackTrace();
            }
            try {
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }

            // Add the pic to the Android Gallery
            String mCurrentPhotoPath = filePath.getAbsolutePath();
            MediaScannerConnection.scanFile(this,
                    new String[] { mCurrentPhotoPath }, null,
                    new MediaScannerConnection.OnScanCompletedListener() {
                        public void onScanCompleted(String path, Uri uri) {
                        }
                    });
        }

    I really can't figure out why it loses so much quality when saved. Any help please? Thanks.
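
    A hedged explanation and sketch: the "data" extra returned by ACTION_IMAGE_CAPTURE is only a small thumbnail, so everything downstream is already low resolution. Asking the camera app to write the full-size image to a file via MediaStore.EXTRA_OUTPUT, then decoding that file, is the usual fix (the file location and naming below are placeholders):

        // Sketch: have the camera app write the full-resolution image to our own file;
        // the "data" extra in onActivityResult is only a thumbnail.
        File photoFile = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES),
                "PIC_" + System.currentTimeMillis() + ".jpg");   // placeholder naming
        Uri photoUri = Uri.fromFile(photoFile);

        Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoUri);
        startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);

        // In onActivityResult, decode the full-size file instead of the extras:
        //   Bitmap fullSize = BitmapFactory.decodeFile(photoFile.getAbsolutePath());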

    Read the article

  • Help! Getting an error copying the data from one column to the same column in a similar recordset..

    - by Mike D
    I have a routine which reads one recordset and adds/updates rows in a similar recordset. The routine starts off by copying the columns to a new recordset. Here's the code for creating the new recordset:

        For X = 1 To aRS.Fields.Count
            mRS.Fields.Append aRS.Fields(X - 1).Name, aRS.Fields(X - 1).Type, _
                aRS.Fields(X - 1).DefinedSize, aRS.Fields(X - 1).Attributes
        Next X

    Pretty straightforward. Notice the copying of the Name, Type, DefinedSize and Attributes... Further down in the code (and there's nothing that modifies any of the columns in between), I'm copying the values of a row to a row in the new recordset as such:

        For C = 1 To aRS.Fields.Count
            mRS.Fields(C - 1) = aRS.Fields(C - 1)
        Next C

    When it gets to the last column, which is numeric, it craps out with the "Multiple-step operation generated an error" message. I know that MS says this is an error generated by the provider, which in this case is ADO 2.8. There is no open connection to the DB at this point either. I'm pulling what little hair I have left over this one... (and I don't really care at this point that the column index is 'X' in one loop and 'C' in the other... I'll change that later when I get the real problem fixed...)
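
    A hedged thing worth checking, since the failure is on a numeric column: Name, Type, DefinedSize and Attributes don't carry over Precision and NumericScale, and a fabricated field whose precision/scale don't match the source can refuse the copied value with exactly this error. A sketch of copying them as well (this assumes the new recordset is a disconnected/fabricated one whose fields can still be adjusted before it is opened):

        ' Sketch: also copy Precision/NumericScale for adNumeric/adDecimal columns,
        ' before mRS.Open, so the copied values fit the fabricated field definition.
        For X = 1 To aRS.Fields.Count
            mRS.Fields.Append aRS.Fields(X - 1).Name, aRS.Fields(X - 1).Type, _
                aRS.Fields(X - 1).DefinedSize, aRS.Fields(X - 1).Attributes
            If aRS.Fields(X - 1).Type = adNumeric Or aRS.Fields(X - 1).Type = adDecimal Then
                mRS.Fields(X - 1).Precision = aRS.Fields(X - 1).Precision
                mRS.Fields(X - 1).NumericScale = aRS.Fields(X - 1).NumericScale
            End If
        Next X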

    Read the article
