PBR Displacement Advice

Started by WAS, July 04, 2019, 03:44:09 AM


KirillK

#15
Quote from: WASasquatch on July 04, 2019, 06:26:38 PM
respectively Photogrammetry Based Material and Time of Flight Based Material

I am actually thinking of picking up a Huawei P30 phone because it has a 5-meter ToF sensor that works with several AR/photogrammetry apps. My LG G8 has a ToF sensor, but it's much smaller, has maybe a quarter-meter full-resolution distance, and has no API hook.


I wonder if anyone can really get a good material from ToF photogrammetry. I always thought it was a pretty low-precision thing. Is it really ready for material capture? What software works with it?

I use Reality Capture and Agisoft Metashape, and both can produce super high-res meshes: hundreds of millions of polygons, down to the smallest, tiniest surface details, like asphalt grains or the veins in fallen leaves on the ground. With a good enough camera and a properly captured series, at least. It takes hours to calculate, though, and baking the result into a displacement map is a whole separate problem; the sketch below shows roughly what that step involves.
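To show what I mean by that last problem, the bake step can be sketched roughly like this. Purely illustrative: it assumes a roughly planar, Z-up scan, "scan.obj" and the resolution are placeholder values, and real bakers (xNormal, Substance, or Reality Capture itself) handle UVs and cages far better:

```python
# Rough sketch of baking a displacement map from a dense scan mesh:
# ray-cast a regular grid straight down onto the surface and write the
# hit heights out as a 16-bit grayscale image.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("scan.obj")      # assumes a single Trimesh, Z-up
res = 1024                           # one ray per texel; slow when large
(xmin, ymin, zmin), (xmax, ymax, zmax) = mesh.bounds

gx, gy = np.meshgrid(np.linspace(xmin, xmax, res),
                     np.linspace(ymin, ymax, res))
origins = np.column_stack([gx.ravel(), gy.ravel(),
                           np.full(gx.size, zmax + 1.0)])
dirs = np.tile([0.0, 0.0, -1.0], (len(origins), 1))

# First hit per ray; texels the rays miss stay at the lowest height.
locs, ray_ids, _ = mesh.ray.intersects_location(origins, dirs,
                                                multiple_hits=False)
height = np.full(res * res, float(zmin))
height[ray_ids] = locs[:, 2]

# Normalize to the full 16-bit range (0 = lowest point, 65535 = highest).
norm = (height - height.min()) / max(np.ptp(height), 1e-9)
img = (norm.reshape(res, res) * 65535).astype(np.uint16)
Image.fromarray(img).save("displacement.png")
```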

So I wonder, is ToF really useful yet?

WAS

#16
Quote from: KirillK on July 05, 2019, 05:40:11 AM
So I wonder, is ToF really useful yet?

I don't understand? ToF is already replacing most antiquated 3D scanning technology because it operates much faster; most modern LIDAR is ToF. As for resolution, it's used to accurately map human faces for facial recognition, down to freckles and facial pockmarks. Additionally, my LG can recognize the veins in my palm and its print. The only mention I see of resolution limits is from 2011.

The Nokia 9 uses its ToF sensor for DoF and encodes its depth map into its JPEGs (https://imgur.com/AqvW2lY) at 16-bit depth (16x16 pixel shading).

This may be where some of the more commercial options are getting their resolution boosts from: https://ieeexplore.ieee.org/document/7335463 This was from 2015, well before my ToF sensor was developed, as well as the others in phones today.

Raw demonstration footage from LUCID (developed 2016-2017) shows pretty tightly packed point data that can be approximated easily:


Given how fast it processes compared to traditional laser scanning, and since ToF approximates just like laser scanning does, ToF scanning will get you to a final product much quicker; and because computation of the data happens on the PC/Mac side, it can happen live as you collect data.
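To illustrate the "live" part: each ToF frame is just a depth image, and back-projecting it through the pinhole model gives you a point cloud per frame. A minimal sketch, not any phone's actual API; fx, fy, cx, cy are assumed calibration values:

```python
import numpy as np

def depth_frame_to_points(depth, fx, fy, cx, cy):
    """depth: HxW array in meters (0 = no return). Returns Nx3 points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # standard pinhole back-projection
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no return

# e.g. a 640x480 sensor at video rates streams millions of points per second
```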

KirillK

#17
Quote from: WASasquatch on July 05, 2019, 05:55:10 AM
As for resolution, it's used to accurately map human faces for facial recognition, down to freckles and facial pockmarks. Additionally, my LG can recognize the veins in my palm and its print. The only mention I see of resolution limits is from 2011.
Could you show what you are able to do with ToF material-wise, please? A render maybe, with an actual reconstructed surface?

That  "hand" gif  clearly shows  low precision with all those random bumpiness even having very dense point cloud.         Could it really make recognizable veins and pocks?     

As for LIDAR scanners, they mostly use point-by-point laser beam scanning with a rotating mirror, at least the ones I have seen, and if I am not wrong, ToF is just one shot.

I mean, could ToF cameras do something like this: https://www.dropbox.com/s/xcsehxleves6ob7/dirt.jpg?dl=0

I am not trying to question the prospects of ToF technology, just trying to figure out whether it's useful right now.

WAS

#18
Quote from: KirillK on July 05, 2019, 07:14:06 AM
That "hand" gif clearly shows low precision, with all that random bumpiness even in a very dense point cloud. Could it really produce recognizable veins and pockmarks?

As for LIDAR scanners, they mostly use point-by-point laser beam scanning with a rotating mirror, at least the ones I have seen, and if I am not wrong, ToF is just one shot.

Honestly, I'm not sure how you can't recognize that one frame from a ToF sensor yields more point data than a linear laser scan does...

Quote from: KirillK
I mean, could ToF cameras do something like this: https://www.dropbox.com/s/xcsehxleves6ob7/dirt.jpg?dl=0

I am not trying to question the prospects of ToF technology, just trying to figure out whether it's useful right now.

I don't think you understand how point data is collected and used for resolution... That live video feed from the ToF isn't 180 passes a second fed into an algorithm rebuilding a material. It's also in a fixed position, not scanning; the hand is only slightly moving. But there is in reality far more data in one frame of that video than in a linear laser scanner's beam, then multiplied at 180 shots a second.
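To give a feel for what those repeated shots buy you: averaging N depth frames of a static subject cuts the per-pixel noise by roughly the square root of N, so one second at 180 fps is about a 13x noise reduction. A toy sketch with hypothetical, pre-aligned frames:

```python
import numpy as np

def accumulate_depth(frames):
    """frames: list of aligned HxW depth arrays in meters, 0 = invalid."""
    stack = np.stack(frames)
    valid = (stack > 0)
    counts = np.maximum(valid.sum(axis=0), 1)   # avoid divide-by-zero
    return stack.sum(axis=0) / counts           # per-pixel mean of valid hits
```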

As already stated, I am looking into getting a Huawei with a ToF sensor that GAPI can hook into; the LG ToF is proprietary. If you want to see results, browse Sketchfab for the many Huawei P30 Pro photogrammetry materials where ARCore and 3DScan use the ToF. Most photogrammetry on that site is from phones rather than scanned by a company (the rarer case).

And I'm not sure why you are arguing this, as it's no secret the resolution is strong enough to create biometric prints down to skin imperfections and veins below the surface, all in less than a second or two of gathering direct field data.

Also, all scanning methods outside a professional, expensive 3D scanner require editing to fix mesh issues. Even with standard photogrammetry techniques you'll need to gather all this data manually, which is why ToF sensors are starting to be integrated into software like ARCore: they aid the process enormously by providing depth sensing independent of the photo processing and stitching used to approximate depth.


KirillK

#19
Quote from: WASasquatch on July 05, 2019, 02:03:15 PM
And I'm not sure why you are arguing this, as it's no secret the resolution is strong enough to create biometric prints down to skin imperfections and veins below the surface, all in less than a second or two of gathering direct field data.



I can admit I don't understand much of what you are saying, sorry. And I am not arguing, I just want to see a surface reconstructed with a ToF camera. I couldn't find anything from Huawei on Sketchfab or anywhere.

However, I was able to find Nokia 9 depth maps here: https://onedrive.live.com/?authkey=%21AOmSLg2vqdKCpqc&id=6FF84EEABE79B8A6%215226&cid=6FF84EEABE79B8A6
Someone posted them, and what I see is not even close to regular photogrammetry. They are merely enough for a DoF effect, and that's all.

You are saying there is an algorithm rebuilding a material, good enough for veins and skin imperfections? Could you post a link please, a picture, something? Or is ToF photogrammetry not ready yet and only a theoretical possibility?

Sorry, I don't understand what data you mean with standard photogrammetry techniques; usually it's just a photo series.
Here is a concrete block of 34 million triangles in Reality Capture. Not much manual mesh editing is required if you shoot it the right way.

WAS

#20
Quote from: KirillK on July 05, 2019, 06:44:40 PM
I just want to see a surface reconstructed with a ToF camera. I couldn't find anything from Huawei on Sketchfab or anywhere.

However, I was able to find Nokia 9 depth maps... They are merely enough for a DoF effect, and that's all.

Granted, the field is just starting to open up to the average consumer, and even then barely. The API for hooking into rear-facing ToF sensors just came out this year, but it's been a hot topic, and it isn't hard to find plenty of discussion, even comparisons between all the formats from years ago; I'll try to find a link to that. But here is some other stuff. As for Sketchfab, you won't really know unless they tell you it was taken on a supporting phone. And yes, as already noted, the Nokia 9's is used for DoF.

https://www.androidauthority.com/lg-g8-thinq-vein-recognition-956358/amp/
https://www.businessinsider.com/lg-g8-smartphone-unlocks-with-hand-id-vein-palm-recognition-2019-2
What's shown here is proprietary as well: https://www.laserfocusworld.com/detectors-imaging/article/16555309/facial-recognition-3d-tof-camera-technology-improves-facial-recognition-accuracy-and-security
Here is an old comparison between scanning types I was talking about, the first being ToF: https://www.researchgate.net/figure/Experimental-results-for-small-objects-at-a-distance-under-outdoor-illumination-a-Top_fig8_316026814
And https://www.researchgate.net/figure/Comparison-of-time-of-flight-ToF-and-photometric-stereo-methods-a-shows-the-target_fig1_323592356

That's all I'm really going to post on the subject. I've done enough digging myself to know it's a pioneering field, especially where the approaches are mixed, like in the third link above.

And I'm not sure what keywords you are using when researching, but there is a whole lot out there on ToF and environment/object scanning.

And I'm not here to prove why I'd like to get involved with something new and get a sensor to fiddle with the API and ARCore's full feature set.

KirillK

#21
Thanks a lot, WASasquatch. I just got the wrong impression that ToF photogrammetry is already more or less available, specifically with the new phone generation.

The question of whether ToF sensors could be useful for 3D material reconstruction has been crossing my mind for more than a decade already, but each time I try to find any new achievements in the area, it's always just something smooth-shaped and not detailed enough.

Even this LG vein recognition seems to be not exactly shape/surface reconstruction, but rather a picture of "infrared absorption", as they put it. So not actually time of flight, but rather just using its infrared source to read back what the palm skin is absorbing. After all, a palm's surface has no prominent veins at all.

I hope it's just a field that hasn't found much focus yet, I mean tiny surface detail shaping and reconstruction. Maybe it's the necessary software and processing, which aren't suited to phones or mass-market interests, rather than ToF technology limitations.

But I am looking forward to this too. It's something promising indeed.

WAS

Quote from: KirillK on July 06, 2019, 06:24:03 AM
Even this LG vein recognition seems to be not exactly shape/surface reconstruction, but rather a picture of "infrared absorption", as they put it.

Did you read the articles? Your idea of resolution is not bound to the final image. Even in the facial recognition software, what it reads in a second is far more accurate and detailed than what you would get from minutes of scanning all angles of someone's face. Fed into algorithms (just like laser scanning), that data can generate highly accurate depth maps. Surface detail isn't even recorded by consumer laser scanning, so I'm not sure what your gripe is there.

And yes, as of THIS year, ToF sensors are starting to be seen in phones, as mentioned several times; the API to even use those sensors through Android was also just released. Apple has none yet.

All the reasons why it is good for depth sensing (and why it is a depth sensor) and is becoming a hot tech field are also pretty easy to see. Inherently you can gather 100x the data in a second compared to a scanner... A second. Not minutes of scanning every possible surface angle so the laser isn't confused by refraction or odd angles. A ToF sensor handles diffusion and refraction much better than directly bounced lasers.

KirillK

#23
I mostly compare it not with what point-by-point LIDARs do, but with regular photogrammetry done from parallax in a series of photos, without any laser at all. Photo series from the new 50-megapixel cameras can produce unbelievably detailed surfaces with almost zero noise errors. I use a 19-megapixel Foveon-sensor camera and it makes super crisp geometry, down to tiny pores and small cracks, sometimes 300-500 million triangles per square meter.
The disadvantages: it takes lots of time, RAM, and CPU/GPU power, needs a monster of a PC, and obviously only works well with static subjects.
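For what it's worth, the decimation step those monster meshes force on you before baking looks roughly like this. A sketch assuming Open3D; the file name and target count are placeholders, and ZBrush or MeshLab do the same job:

```python
# Sketch: knocking a photogrammetry scan down to a bakeable triangle
# count with quadric decimation. Very slow on 300M+ triangle inputs.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.ply")
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=19_000_000)
simplified.compute_vertex_normals()
o3d.io.write_triangle_mesh("scan_19M.ply", simplified)
```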

I read the articles, but not a single one demonstrates even what a typical LIDAR scanner can do with enough processing time at a close 2-3 meter distance. Leicas, for example (with static subjects too, obviously).

Everything I see is a more or less low-res human face, probably enough for face recognition, but IMO not enough for quality material displacement maps.

So I wonder whether it's something ToF cameras simply can't do, or whether such a low-res shape is the result of real-time capture with not enough scan iterations, processing time, or something else?

And why do ToF sensors have such small resolutions, 400x300 pixels or something? Could we expect them in quality pro cameras, not just cell phones?

WAS

#24
See, this is where you're missing the point. Both collect point data... one just does it astronomically faster. You said it yourself:

Quote from: KirillK
I read the articles, but not a single one demonstrates even what a typical LIDAR scanner can do with enough processing time at a close 2-3 meter distance. Leicas, for example (with static subjects too, obviously).
Whereas this can be achieved in a mere second, just for facial recognition, from one fixed position... The camera collects a point cloud of the face in real time and the CPU (of a flipping phone, for crying out loud) computes it in real time into a 3D depth map of a person's face, and then compares. I don't understand how you don't see the potential here, I really don't. Lol. These cameras can be used for longer than a second and moved around, using the spatial awareness of the phone's gyroscopes too. When the emission distance grows beyond the hobbyist consumer range, it will be picked up and rigged into handheld and tethered scanners, trust me. Lol.

I don't think you're looking deeply enough at what it offers. When you're up close and personal with objects you can achieve a lot, and ToF sensor range is only increasing; phones alone now achieve full resolution at 5 m (up from about 1 m just a year or two ago). That's pretty far for consumer scanning...

Most consumer laser-based 3D scanners have a quantum efficiency of less than 25% at 0.7 mm scale (which is why most models are putty-like). The facial recognition demonstration I linked has a quantum efficiency of 50% at 0.13 mm.

No offense, but I don't think you're comparing the data between the formats. Here is a document from Thor, a popular consumer 3D scanner, and it's of substantially lower quality... http://thor3dscanner.com/what-is-%E2%80%9Cresolution%E2%80%9D-in-a-3d-scanner-and-why-is-it-important

I think you're glossing through and taking image examples as proof of its limits rather than of what it's demonstrating.

And your image example above is extremely biased, because it is baked with approximation-based normal mapping for its roughing, and it also incorporates its texture for the illusion of detail in a final product.

KirillK

#25
My goal is material displacement/normal map capture. Real-time speed is good indeed, but only for living things, and I am mostly interested in static subjects.

My only experience is with the super expensive Leica LIDAR that a company I work for uses. Even it couldn't produce anything close to Reality Capture photogrammetry, except that the LIDAR makes a more accurate macro shape, while traditional photogrammetry, for all its super cool tiny details, sometimes makes macro errors: something flat may come out slightly bent, and so on.

I guess for material capture what matters is not how far away the sensor can see, but rather how many depth gradations it can record. I bet the further it sees, the more stepped the result might be. The Nokia 9 records 1200 depth steps/layers, from what I read in one of your links. That's not much, actually.
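Just to put that 1200 in perspective, a back-of-envelope sketch (the 1 m capture range is my assumption, not a spec, and real sensors quantize non-linearly):

```python
# With N discrete depth levels spread over a capture range R, the
# smallest height step you can represent is R / N.
def depth_step_mm(range_m: float, levels: int) -> float:
    return range_m * 1000.0 / levels

print(depth_step_mm(1.0, 1200))    # ~0.83 mm per step over a 1 m range
print(depth_step_mm(1.0, 65536))   # ~0.015 mm for a 16-bit displacement map
```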

Here is that block in ZBrush, optimized down to 19 million triangles. It was done with an old, not very high-res Nikon; the surface could actually be much more detailed. There's no normal map; it's actually the source to bake a normal map from.

Again, I am not trying to question the virtues of the ToF approach. Something that works in real time, while you take the shots, is super cool indeed.
I am just trying to figure out whether the approach can reach a comparable level of surface accuracy and geometry crispness, enough to bake normal maps from. Even if not, I bet it could still be useful, with micro surface details added by means of CrazyBump or something; a sketch of that step is below.
I just want to see something to compare that is not as ugly as the Nokia 9 examples I found.
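That CrazyBump-style step is essentially this, a minimal sketch assuming a grayscale float height image as input; "strength" is an arbitrary knob, and the green-channel sign depends on your engine's normal-map convention:

```python
# Derive a tangent-space normal map from a height map by finite
# differences, then pack the unit normals into RGB [0, 255].
import numpy as np

def height_to_normal_map(height, strength=1.0):
    dz_dx = np.gradient(height, axis=1) * strength
    dz_dy = np.gradient(height, axis=0) * strength
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)    # unit normals
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)  # pack into RGB
```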

WAS

#26
Quote from: KirillK on July 06, 2019, 05:37:20 PM
I just want to see something to compare that is not as ugly as the Nokia 9 examples I found.

You're really stuck on the Nokia 9, which, as already noted, and in its own article, uses its ToF for DoF. All it's doing on the Nokia 9 is double-checking and correcting the depth approximated from the camera array, so there aren't errors from image-based approximation; the ToF can tell it "No, that's actually in the background/foreground."

And as I've explained, this is used right now as an AID to photogrammetry. Even on the Huawei it's used as an aid to ARCore's spatial recognition and depth sensing, making sure what is applied through imaging is accurate (much like on the Nokia 9). The API was just released, like I've mentioned several times (it's getting old now; at this point it's just ignorance); heck, we haven't even seen ANY official ToF-based AR applications released yet that were targeted for mid-2019.

You need to cool your jets and either appreciate a new field or move on. Lol. I know the potential here has been gone over in various places. How you can't see that an accurate face map done in a second, as opposed to 2-5 minutes of scanning all angles and re-scanning errors, is outstanding and opens up a whole new avenue is beyond me. At this point it must just be arrogance/ignorance.

I'm not here to prove anything, again, or to justify why I want to tinker with the Google ToF API and look forward to applications that calculate point data outside proprietary tech like facial recognition, or even just to play with AR. Really not. Nor to have an argument about it because you're obsessed with antiquated techniques that are broken down in an article I shared on why mixing these technologies is the future.

And how you are still caught up on detail is beyond me, for something improving day and night, with almost a 10x gain in resolution in a year... If the ToF in its raw, basic form on the LG can distinguish veins below the surface and recognize a face down to moles and freckles, it already has a lot of detail. For example, my fiancée cannot unlock her phone with foundation on, as it covers the beauty marks the software is specifically using as a unique identifier.

In general, just from looking at the export of the man's face as a depth map, I can tell it's reading an unprecedented amount of surface detail in a second, without scanning, even in a blurred and highly compressed JPEG. I'm sorry you can't. And note that a depth map is not a displacement map; depth maps are intentionally smooth.
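If you did want displacement out of a depth map, the usual move would be splitting off the smooth base with a high-pass, roughly like this sketch; the sigma value is an assumption, not from any tool:

```python
# Separate a smooth depth base from the fine "displacement" layer
# with a Gaussian high-pass filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_depth(depth: np.ndarray, sigma: float = 16.0):
    """depth: float HxW. Returns (smooth base shape, fine detail layer)."""
    base = gaussian_filter(depth, sigma=sigma)
    detail = depth - base      # near zero-mean; this is the micro relief
    return base, detail
```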

When the field opens up, I'd love to show you all it can do (even though it's been doing it for a while, as in the article I gave you on object scanning, both on its own and mixed with other mediums for accuracy, from years ago, and pretty good for such low resolution).

Also, it still seems that object is using normal mapping derived from the distortion of detail by angle. And I'm going to assume it wasn't scanned with a 15k scanner, and thus that surface detail is likely approximated from images. Almost all hobbyist consumer scanners use a lot of approximation, even in mesh building, but the detail is all from images. The resolution of most scanners out today is LESS than the ToFs we have covered (again, the Thor has a quantum efficiency of less than 25% at 0.7 mm scale); that's HALF the resolution of the ToF I compared to. And that's an expensive scanner. The meshes were pretty putty-like without detail approximated from images.

WAS

#27
Since you're still caught up on mesh examples, here is one from last year, from Sony's ToF. Mind you, again, this is a mesh created from only a second or two of emission, not from scanning all angles over the course of minutes or more.

https://www.unifore.net/product-highlights/sony-released-3d-bsi-tof-image-sensor-imx456ql.html

Again, remember, this is only a mesh based on depth; depth maps do not incorporate surface detail, as that would interfere with meshing (just like 3D scanners).

Surface detail isn't really a concern because, as I've mentioned, for most of us it is done via approximation, not by lasers actually scanning every bit of surface detail. That's out of most consumers' reach and is usually actually CT scanning, not laser scanning. The 3D scan of Nefertiti's head, for example, was done with a hand-held CT scanner, not a 3D scanner, to capture the actual skull and skin detail needed for reconstruction.

Oshyan

Guys, this discussion has moved out of the realm of productive and friendly discourse. WAS, Kirill appears simply not to see what you're seeing, but to call him arrogant or ignorant is needlessly inflammatory. The fact that you have yet to provide a link to a specific, high-detail model created by ToF (no, the Sony example doesn't cut it) would seem to indicate that it is still not ready for "prime time". Maybe you're right and it will prove itself very soon, but wait until it does, and then you can demonstrate it clearly. In the meantime, accept that you have a difference of perspective and move on.

- Oshyan

WAS

#29
Quote from: Oshyan on July 07, 2019, 02:17:34 PM
The fact that you have yet to provide a link to a specific, high-detail model created by ToF (no, the Sony example doesn't cut it) would seem to indicate that it is still not ready for "prime time".

IMO it's due. Going around like a broken record simply fits that definition.

And it's funny how many times I've noted that this is a brand-new field I want to get involved with, not a standardized field, or even one that's really STARTED.

And yes, it does cut it, Oshyan. What is your basis for refuting that? It is a highly accurate 3D mesh created from a second of exposure. Please don't play arrogant as well. These are the steps any field takes in its infancy, and this one shows unprecedented speed and accuracy over 3D scanning. That's just fucking inherent, Oshyan, both in science and in testing applications. It's not even something to argue over. It's there, and it proves itself. The fact that you aren't familiar with what it takes to create a mesh or 3D model doesn't refute any of this.