Feeding shared pages of interest

Dick 09/09/2012 01:35 pm
I use my phone, tablet and PC for reading/searching the web. The only common way to share a page when I read something interesting is something like 'send this link'.

I collect the links on my laptop. I would like to feed those links into OE, so I can create a kind of personal web on my PC with only those interesting pages. I'd like to download them, though, as OE allows me to. So bookmarks won't do; one reason is local search on my pages of interest.

I have (automatically) created new project import files, one for every link of interest, based on the 'download one page' template.

How can I import those 'new projects'? Just one by one? When using File/New/Load from text file, there seems to be no wildcard * for the file name or a way to select all files.
I'd also appreciate a line for specifying the project folder name in the new project text file import.

Or is there a different way to handle these links to interesting pages?

(sorry if I posted this twice)
Oleg Chernavin 09/09/2012 01:39 pm
I think the best way would be to combine all these text files into one. To create a new folder, add the following lines at the top of the text file for import:

[Object]
Type=1
Text=Folder name

You may also copy the URLs to the clipboard (one per line) and paste them using Ctrl+Shift+V - this creates a separate Project per URL. You may later select these new Projects and apply the desired Template to them.

Best regards,
Oleg Chernavin
MP Staff
Dick 09/14/2012 05:23 am
Thanks, that worked for me. I decided not to use the Project input, since it created a new project with the same name for every addition. Instead, I just click on the project and then import the file with the URLs/templates.

I have included the program that converts shared-link emails into either a URL list or a concatenated template file (which is what I use). It is written in Macro Scheduler, though.

==== source (Macro Scheduler 'language')
//Set IGNORESPACES to 1 to force script interpreter to ignore spaces.
//If using IGNORESPACES quote strings in {" ... "}
//Let>IGNORESPACES=1

// file names, structure my documents/1webDoc (mainDir)
// inDir (todo)
// outDir (OEinp)
// change directory names here
Let>mainDir=1webDoc
Let>inDir=0a todo
Let>outDir=0b oeInp
let>folderInOEproject=WebDoc
let>urlListFn=allUrls.txt

// make fully expanded dir and file names, starting with the disk drive (of My Documents)
let>myDocDirFull=%USERDOCUMENTS_DIR%\
let>dirMainFull=%myDocDirFull%%mainDir%\
Let>inDirFull=%dirMainFull%%inDir%\
Let>outDirFull=%dirMainFull%%outDir%\
Let>templateFn=%dirMainFull%export.txt

// make full date/time string in local format: yyyy-mm-ddtHH.mm
Day>dayMy
Month>monthMy
Year>yearMy
Hour>hourMy
Min>minMy
Let>dateTime=%yearMy%-%monthMy%-%dayMy%t%hourMy%.%minMy%

// make output file names based on the full date: URL list and OE template-based output
// collection of all URLs only, one per line: allUrls2012-09-13t14.20.txt
// input for OE: oeIn2012-09-13t14.20.txt
Let>oeInFn=oeIn%dateTime%.txt
Let>urlListFn=allUrls%dateTime%.txt

// write header lines to the new OE input file; these would create a new folder in OE (not used currently)
Let>WLN_NOCRLF=0
//Let>MSG_STAYONTOP=1
//MessageModal>%outDirFull%%oeInFn%
// WriteLn>%outDirFull%%oeInFn%,result,[Object]
// WriteLn>%outDirFull%%oeInFn%,result,Type=1
// WriteLn>%outDirFull%%oeInFn%,result,Text=%folderInOEproject%

// start processing all HTML/text files containing the URLs to be collected

// get and separate the list of file names in the input directory
Let>GFL_TYPE=0
GetFileList>%inDirFull%*.*,fileList
Separate>fileList,;,fileNames
Let>fnum=0

// for every file do
While>fnum<fileNames_count
Let>fnum=fnum+1
// get next file name and read the email contents containing the URL
Let>fNameCur=fileNames_%fnum%

ExtractFileName>fNameCur,projName
ReadFile>fNameCur,mailBody
// find the URL; first check for an HTML href URL
Position>href="http,mailBody,1,urlBeg,FALSE
// is it a plain-text file?
if>urlBeg=0
// yes, it's plain text; just take the first http until EOL
Position>http://,mailBody,1,urlBeg,FALSE
// but if there is no http at all, indicate **No URL Found
if>urlBeg=0
Let>urlLen=0
let>projUrl=**No URL Found
else>
Position>CR,mailBody,urlBeg,urlEnd,TRUE
Let>urlLen=urlEnd
MidStr>mailBody,urlBeg,urlLen,projUrl
endif>
else>
// no, it's an HTML file; find the end of the href="http...urlstring"> mark, don't take the closing quote
let>urlBeg=urlBeg+6
Position>",mailBody,urlBeg,urlEnd,TRUE
let>urlLen=urlEnd-1
MidStr>mailBody,urlBeg,urlLen,projUrl
endif>
// write the plain URL to the collection file
WriteLn>%outDirFull%%urlListFn%,result,projUrl
// now start processing the collection of templates with this URL
// read the template and replace the project name and URL placeholders
ReadFile>templateFn,content
StringReplace>content,**projName,projName,contentNew
StringReplace>contentNew,**projURL,projUrl,contentNew2
//StringReplace>outDirFull,**projURL,projUrl,contentNew2
//Let>MSG_STAYONTOP=1
//MessageModal>%outDirFull%%projName%
// and append template to other templates collected so far
Let>WLN_NOCRLF=0
WriteLn>%outDirFull%%oeInFn%,result,contentNew2
EndWhile
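
For reference, here is a minimal sketch of the simpler route Oleg suggested above: concatenating the per-link import files into a single OE import file with the folder header on top. It assumes the per-link import files sit in one directory (inDirFull is reused here purely for illustration) and that the directory variables from the setup section of the script above are already set; the names combinedFn, importList, importFiles, k, curFn and importBody are illustrative, not taken from the script.

==== sketch (Macro Scheduler)
// write the folder header (as in Oleg's example) at the top of the combined file
Let>combinedFn=%outDirFull%combined%dateTime%.txt
Let>WLN_NOCRLF=0
WriteLn>%combinedFn%,result,[Object]
WriteLn>%combinedFn%,result,Type=1
WriteLn>%combinedFn%,result,Text=%folderInOEproject%

// collect the per-link import files and append each one to the combined file
Let>GFL_TYPE=0
GetFileList>%inDirFull%*.txt,importList
Separate>importList,;,importFiles
Let>k=0
While>k<importFiles_count
Let>k=k+1
Let>curFn=importFiles_%k%
ReadFile>curFn,importBody
WriteLn>%combinedFn%,result,importBody
EndWhile

The resulting file can then be loaded once with File/New/Load from text file, instead of importing the per-link files one by one.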